Jan 26 18:30:25 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 18:30:25 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:30:25 crc restorecon[4696]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc 
restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc 
restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc 
restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:25 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 
crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:30:26 crc restorecon[4696]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:30:26 crc restorecon[4696]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 
crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc 
restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:30:26 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 18:30:26 crc kubenswrapper[4737]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.807827 4737 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810860 4737 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810880 4737 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810885 4737 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810889 4737 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810902 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810908 4737 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810914 4737 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810919 4737 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810924 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810928 4737 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810932 4737 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810936 4737 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810940 4737 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810944 4737 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810948 4737 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810952 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810955 4737 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810959 4737 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810964 4737 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810968 4737 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810971 4737 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810975 4737 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810979 4737 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810983 4737 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810987 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810991 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810995 4737 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.810999 4737 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811005 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811011 4737 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811018 4737 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811024 4737 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811031 4737 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811038 4737 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811044 4737 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811049 4737 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 
18:30:26.811054 4737 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811058 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811062 4737 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811088 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811096 4737 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811102 4737 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811107 4737 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811112 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811117 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811122 4737 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811127 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811131 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811136 4737 feature_gate.go:330] unrecognized feature gate: Example Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811143 4737 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811149 4737 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811154 4737 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811157 4737 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811161 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811165 4737 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811169 4737 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811175 4737 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811180 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811183 4737 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811187 4737 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811191 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811194 4737 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811198 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811201 4737 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811205 4737 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811208 4737 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811212 4737 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811216 4737 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811221 4737 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811225 4737 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.811231 4737 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811651 4737 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811666 4737 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811675 4737 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811681 4737 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811687 4737 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811692 4737 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811699 4737 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811705 4737 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811709 4737 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811713 4737 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811718 4737 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811722 4737 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811726 4737 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811730 4737 flags.go:64] FLAG: --cgroup-root="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811735 4737 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811739 4737 flags.go:64] FLAG: --client-ca-file="" Jan 26 18:30:26 crc kubenswrapper[4737]: 
I0126 18:30:26.811744 4737 flags.go:64] FLAG: --cloud-config="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811749 4737 flags.go:64] FLAG: --cloud-provider="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811753 4737 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811758 4737 flags.go:64] FLAG: --cluster-domain="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811763 4737 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811767 4737 flags.go:64] FLAG: --config-dir="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811771 4737 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811776 4737 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811781 4737 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811785 4737 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811789 4737 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811794 4737 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811798 4737 flags.go:64] FLAG: --contention-profiling="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811802 4737 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811807 4737 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811812 4737 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811816 4737 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 18:30:26 crc 
kubenswrapper[4737]: I0126 18:30:26.811821 4737 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811825 4737 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811829 4737 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811833 4737 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811837 4737 flags.go:64] FLAG: --enable-server="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811841 4737 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811847 4737 flags.go:64] FLAG: --event-burst="100" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811851 4737 flags.go:64] FLAG: --event-qps="50" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811856 4737 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811860 4737 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811864 4737 flags.go:64] FLAG: --eviction-hard="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811870 4737 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811875 4737 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811880 4737 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811885 4737 flags.go:64] FLAG: --eviction-soft="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811892 4737 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811898 4737 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 
18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811904 4737 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811909 4737 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811914 4737 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811919 4737 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811925 4737 flags.go:64] FLAG: --feature-gates="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811931 4737 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811936 4737 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811941 4737 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811946 4737 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811951 4737 flags.go:64] FLAG: --healthz-port="10248" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811956 4737 flags.go:64] FLAG: --help="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811962 4737 flags.go:64] FLAG: --hostname-override="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811968 4737 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811974 4737 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811979 4737 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811985 4737 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811990 4737 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 
18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.811995 4737 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812000 4737 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812005 4737 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812010 4737 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812015 4737 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812021 4737 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812026 4737 flags.go:64] FLAG: --kube-reserved="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812030 4737 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812034 4737 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812039 4737 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812044 4737 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812048 4737 flags.go:64] FLAG: --lock-file="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812052 4737 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812058 4737 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812065 4737 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812088 4737 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812092 4737 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 
18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812096 4737 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812100 4737 flags.go:64] FLAG: --logging-format="text" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812105 4737 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812110 4737 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812114 4737 flags.go:64] FLAG: --manifest-url="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812118 4737 flags.go:64] FLAG: --manifest-url-header="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812124 4737 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812128 4737 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812134 4737 flags.go:64] FLAG: --max-pods="110" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812138 4737 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812142 4737 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812147 4737 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812151 4737 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812155 4737 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812159 4737 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812164 4737 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 18:30:26 
crc kubenswrapper[4737]: I0126 18:30:26.812174 4737 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812178 4737 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812182 4737 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812187 4737 flags.go:64] FLAG: --pod-cidr="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812191 4737 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812198 4737 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812201 4737 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812206 4737 flags.go:64] FLAG: --pods-per-core="0" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812210 4737 flags.go:64] FLAG: --port="10250" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812214 4737 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812218 4737 flags.go:64] FLAG: --provider-id="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812222 4737 flags.go:64] FLAG: --qos-reserved="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812226 4737 flags.go:64] FLAG: --read-only-port="10255" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812232 4737 flags.go:64] FLAG: --register-node="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812237 4737 flags.go:64] FLAG: --register-schedulable="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812241 4737 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812249 4737 
flags.go:64] FLAG: --registry-burst="10" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812252 4737 flags.go:64] FLAG: --registry-qps="5" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812257 4737 flags.go:64] FLAG: --reserved-cpus="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812261 4737 flags.go:64] FLAG: --reserved-memory="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812266 4737 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812270 4737 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812275 4737 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812279 4737 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812283 4737 flags.go:64] FLAG: --runonce="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812287 4737 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812291 4737 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812295 4737 flags.go:64] FLAG: --seccomp-default="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812299 4737 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812303 4737 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812307 4737 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812312 4737 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812317 4737 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812321 4737 
flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812325 4737 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812329 4737 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812333 4737 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812337 4737 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812342 4737 flags.go:64] FLAG: --system-cgroups="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812346 4737 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812353 4737 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812358 4737 flags.go:64] FLAG: --tls-cert-file="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812362 4737 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812367 4737 flags.go:64] FLAG: --tls-min-version="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812372 4737 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812377 4737 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812381 4737 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812385 4737 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812389 4737 flags.go:64] FLAG: --v="2" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812396 4737 flags.go:64] FLAG: --version="false" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812401 4737 flags.go:64] FLAG: 
--vmodule="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812406 4737 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.812411 4737 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812699 4737 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812706 4737 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812711 4737 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812715 4737 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812719 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812723 4737 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812727 4737 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812731 4737 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812735 4737 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812740 4737 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812744 4737 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812748 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812752 4737 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812756 4737 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812761 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812764 4737 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812768 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812772 4737 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812776 4737 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812779 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812782 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812786 4737 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812789 4737 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 
18:30:26.812793 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812797 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812801 4737 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812805 4737 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812808 4737 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812813 4737 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812817 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812822 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812826 4737 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812831 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812836 4737 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812841 4737 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812845 4737 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812849 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812853 4737 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812857 4737 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812861 4737 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812865 4737 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812870 4737 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812874 4737 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812878 4737 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812882 4737 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812887 4737 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812891 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812895 4737 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812900 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812904 4737 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812907 4737 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812911 4737 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812915 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812921 4737 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812927 4737 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812933 4737 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812941 4737 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812948 4737 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812954 4737 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812961 4737 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812966 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812971 4737 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812975 4737 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812978 4737 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812983 4737 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812988 4737 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812992 4737 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.812995 4737 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.813007 4737 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.813010 4737 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.813015 4737 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.813022 4737 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.825835 4737 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.825881 4737 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826733 4737 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826767 4737 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826780 4737 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826789 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826795 4737 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826803 4737 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826819 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826827 4737 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826834 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826840 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826846 4737 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826854 4737 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826861 4737 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826868 4737 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826875 4737 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826881 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826887 4737 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826895 4737 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826902 4737 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826916 4737 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826922 4737 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826929 4737 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826936 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826949 4737 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826958 4737 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826967 4737 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826974 4737 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826982 4737 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826989 4737 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.826996 4737 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827003 4737 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827019 4737 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827027 4737 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827037 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827045 4737 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827056 4737 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827087 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827096 4737 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827104 4737 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827111 4737 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827119 4737 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827126 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827133 4737 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827148 4737 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827155 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827163 4737 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827170 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827177 4737 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827183 4737 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827190 4737 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827197 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827204 4737 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827211 4737 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827218 4737 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827225 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827232 4737 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827245 4737 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827252 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827259 4737 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827268 4737 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827275 4737 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827283 4737 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827293 4737 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827303 4737 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827311 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827319 4737 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827326 4737 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.827334 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.828354 4737 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.828434 4737 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.828443 4737 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.828456 4737 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829557 4737 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829571 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829577 4737 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829582 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829588 4737 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829593 4737 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829600 4737 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829606 4737 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829611 4737 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829615 4737 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829620 4737 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829625 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829631 4737 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829636 4737 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829640 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829645 4737 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829653 4737 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829662 4737 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829669 4737 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829674 4737 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829680 4737 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829685 4737 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829691 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829696 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829700 4737 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829705 4737 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829710 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829714 4737 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829720 4737 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829725 4737 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829730 4737 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829734 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829739 4737 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829744 4737 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829749 4737 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829754 4737 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829758 4737 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829763 4737 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829767 4737 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829771 4737 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829776 4737 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829781 4737 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829785 4737 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829791 4737 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829795 4737 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829800 4737 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829806 4737 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829810 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829815 4737 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829822 4737 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829827 4737 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829832 4737 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829838 4737 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829844 4737 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829851 4737 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829857 4737 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829863 4737 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829868 4737 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829873 4737 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829879 4737 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829886 4737 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829891 4737 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829896 4737 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829901 4737 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829905 4737 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829910 4737 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829914 4737 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829919 4737 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829923 4737 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829930 4737 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.829935 4737 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.829943 4737 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.830491 4737 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.833934 4737 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.834051 4737 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.834972 4737 server.go:997] "Starting client certificate rotation"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.834998 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.835407 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 22:03:17.137674695 +0000 UTC
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.835528 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.842089 4737 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.843820 4737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.843849 4737 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.851924 4737 log.go:25] "Validated CRI v1 runtime API"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.868762 4737 log.go:25] "Validated CRI v1 image API"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.870407 4737 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.873319 4737 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-18-25-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.873375 4737 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.896522 4737 manager.go:217] Machine: {Timestamp:2026-01-26 18:30:26.893877958 +0000 UTC m=+0.202072756 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4ebf7606-e2ee-4d28-b0b5-b6f922331ef2 BootID:163b9b97-5fa6-4443-9f0c-6d278a8ade1d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a7:b3:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a7:b3:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ba:0d:a3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:97:00:98 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b4:e8:5c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:95:e3:6c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:4e:c1:6b:c0:75 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:d5:50:35:34:1d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.897111 4737 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.897444 4737 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.898262 4737 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.898537 4737 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.898600 4737 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.898905 4737 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.898921 4737 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.899197 4737 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.899239 4737 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.899804 4737 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.900240 4737 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.901393 4737 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.901428 4737 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.901463 4737 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.901481 4737 kubelet.go:324] "Adding apiserver pod source"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.901496 4737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.904110 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.904213 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.903994 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.904615 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.904682 4737 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.905235 4737 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906061 4737 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906757 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906794 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906809 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906821 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906840 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906853 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906866 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906884 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906896 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906907 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906940 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.906950 4737 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.907212 4737 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.907824 4737 server.go:1280] "Started kubelet" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.908281 4737 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.908345 4737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.908599 4737 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.909416 4737 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 18:30:26 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.911172 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.911215 4737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.911459 4737 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.911486 4737 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.911465 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:58:07.469120485 +0000 UTC Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.911567 4737 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 
18:30:26.911600 4737 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.912369 4737 server.go:460] "Adding debug handlers to kubelet server" Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.912535 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms" Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.912666 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.912781 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.916449 4737 factory.go:55] Registering systemd factory Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.916490 4737 factory.go:221] Registration of the systemd container factory successfully Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.916882 4737 factory.go:153] Registering CRI-O factory Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.916933 4737 factory.go:221] Registration of the crio container factory successfully Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.917029 4737 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.917058 4737 factory.go:103] Registering Raw factory Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.917111 4737 manager.go:1196] Started watching for new ooms in manager Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.916152 4737 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e5b6ce7dc9040 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:30:26.907770944 +0000 UTC m=+0.215965662,LastTimestamp:2026-01-26 18:30:26.907770944 +0000 UTC m=+0.215965662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.919787 4737 manager.go:319] Starting recovery of all containers Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.928895 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.928991 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 
18:30:26.929011 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929028 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929046 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929066 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929135 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929153 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929173 4737 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929190 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929207 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929229 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929246 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929266 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929285 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929303 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929319 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929335 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929349 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929399 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929417 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929432 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929448 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929465 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929481 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929499 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929528 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" 
seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929545 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929561 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929613 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929632 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929651 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929667 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 18:30:26 crc 
kubenswrapper[4737]: I0126 18:30:26.929683 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929699 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929714 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929730 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929746 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929760 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929779 4737 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929795 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929812 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929830 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929848 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929867 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929885 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929907 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929926 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929945 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929962 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929979 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.929996 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930019 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930096 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930117 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930139 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930165 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930193 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930210 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930226 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930266 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930291 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930307 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930326 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930343 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930360 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930378 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930394 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930413 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930430 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930448 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930464 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930480 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930498 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930516 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930534 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930552 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930571 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930589 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930611 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930670 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930687 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930704 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930725 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930742 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930761 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930779 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930797 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930815 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930829 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930842 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930857 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930871 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930889 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930903 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930920 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930934 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930948 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930966 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.930984 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931005 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931024 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931041 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931058 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931112 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931137 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931157 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931177 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931197 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931216 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931237 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931260 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931282 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931299 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931319 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931338 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931357 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931374 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931393 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931413 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931430 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931449 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931467 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931484 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931507 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931532 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931550 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931573 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931594 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931613 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931635 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931654 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931674 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931693 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931709 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931726 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931743 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931760 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931777 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931794 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931812 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931829 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931846 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931866 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931883 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931898 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931915 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931930 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931944 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931963 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931979 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.931998 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.932017 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.932033 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933438 4737 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933509 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933532 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933549 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933563 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933584 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933596 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933609 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933621 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933633 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933647 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933662 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933677 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933691 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933703 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933715 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933733 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933751 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933767 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933782 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933798 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933820 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933835 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933846 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933861 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933872 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933886 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933932 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933953 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933969 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.933986 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934002 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934018 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934032 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934044 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934059 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934088 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934106 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934120 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934136 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934149 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934162 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934177 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934191 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934203 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934215 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934226 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934239 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934302 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934314 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934325 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934335 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934347 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934357 4737 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934368 4737 reconstruct.go:97] "Volume reconstruction finished" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.934375 4737 reconciler.go:26] "Reconciler: start to sync state" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.941111 4737 manager.go:324] Recovery completed Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.956819 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.958682 4737 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.958731 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.958744 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.960828 4737 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.960847 4737 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.960869 4737 state_mem.go:36] "Initialized new in-memory state store" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.973598 4737 policy_none.go:49] "None policy: Start" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.975672 4737 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.975710 4737 state_mem.go:35] "Initializing new in-memory state store" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.978274 4737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.980485 4737 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.980524 4737 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 18:30:26 crc kubenswrapper[4737]: I0126 18:30:26.980555 4737 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.980716 4737 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 18:30:26 crc kubenswrapper[4737]: W0126 18:30:26.981751 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:26 crc kubenswrapper[4737]: E0126 18:30:26.981824 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.012394 4737 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.037299 4737 manager.go:334] "Starting Device Plugin manager" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.037353 4737 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.037368 4737 server.go:79] "Starting device plugin registration server" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.037918 4737 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 18:30:27 crc 
kubenswrapper[4737]: I0126 18:30:27.037951 4737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.038176 4737 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.038250 4737 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.038257 4737 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.051695 4737 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.081447 4737 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.081633 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.083355 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.083387 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.083396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.083539 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084047 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084109 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084498 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084528 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084537 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084662 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084773 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.084802 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085096 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085134 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085145 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085424 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085432 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085673 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.085937 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc 
kubenswrapper[4737]: I0126 18:30:27.086007 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.086028 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.086909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.086964 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.086990 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087238 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087279 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087372 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087584 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087650 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.087999 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088114 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088415 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088476 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088511 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088535 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.088546 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.089468 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.089523 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc 
kubenswrapper[4737]: I0126 18:30:27.089543 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.113950 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138338 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138423 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138502 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138528 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138544 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138581 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138614 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138721 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138770 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138792 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.138950 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139042 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139128 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139173 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139211 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139256 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139757 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139799 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139816 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.139849 4737 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.140522 4737 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241025 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241330 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241724 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241642 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241844 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241872 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241894 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241915 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241934 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241957 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.241979 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242004 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242000 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242032 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242103 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242102 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242127 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242154 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242154 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242230 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242187 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242212 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242290 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242275 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242329 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242358 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242263 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.242508 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc 
kubenswrapper[4737]: I0126 18:30:27.242532 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.341593 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.343929 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.343984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.343997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.344039 4737 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.344389 4737 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.407432 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.412840 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.434084 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: W0126 18:30:27.441376 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1433af3cfb2ab82ff9d9a364bee769f2c4e6ccdf6a791ee98316bd7f86870160 WatchSource:0}: Error finding container 1433af3cfb2ab82ff9d9a364bee769f2c4e6ccdf6a791ee98316bd7f86870160: Status 404 returned error can't find the container with id 1433af3cfb2ab82ff9d9a364bee769f2c4e6ccdf6a791ee98316bd7f86870160 Jan 26 18:30:27 crc kubenswrapper[4737]: W0126 18:30:27.453856 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-531ce10069e85d9beb4bf8592bead2a04e107eb2d9799a2dec86b36a4893483d WatchSource:0}: Error finding container 531ce10069e85d9beb4bf8592bead2a04e107eb2d9799a2dec86b36a4893483d: Status 404 returned error can't find the container with id 531ce10069e85d9beb4bf8592bead2a04e107eb2d9799a2dec86b36a4893483d Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.456181 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.464316 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 18:30:27 crc kubenswrapper[4737]: W0126 18:30:27.481603 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c05ae3137a1be989e90fa1d0c8c7c9ad08814f90a4fa87599ed0e3f0fd7f67b1 WatchSource:0}: Error finding container c05ae3137a1be989e90fa1d0c8c7c9ad08814f90a4fa87599ed0e3f0fd7f67b1: Status 404 returned error can't find the container with id c05ae3137a1be989e90fa1d0c8c7c9ad08814f90a4fa87599ed0e3f0fd7f67b1 Jan 26 18:30:27 crc kubenswrapper[4737]: W0126 18:30:27.488197 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8a26c7b6363651e559fbe7c454df897d0fbe2a29d5666806629a0d63f43262e4 WatchSource:0}: Error finding container 8a26c7b6363651e559fbe7c454df897d0fbe2a29d5666806629a0d63f43262e4: Status 404 returned error can't find the container with id 8a26c7b6363651e559fbe7c454df897d0fbe2a29d5666806629a0d63f43262e4 Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.515503 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="800ms" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.745445 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.746747 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.746781 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc 
kubenswrapper[4737]: I0126 18:30:27.746791 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.746814 4737 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.747362 4737 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 26 18:30:27 crc kubenswrapper[4737]: W0126 18:30:27.865119 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:27 crc kubenswrapper[4737]: E0126 18:30:27.865209 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.909507 4737 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.911565 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:05:09.820808306 +0000 UTC Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.987301 4737 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935" exitCode=0 Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.987373 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935"} Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.987496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8a26c7b6363651e559fbe7c454df897d0fbe2a29d5666806629a0d63f43262e4"} Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.987624 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.989592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.989801 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.989814 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.995116 4737 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe" exitCode=0 Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.995223 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe"} Jan 26 18:30:27 
crc kubenswrapper[4737]: I0126 18:30:27.995323 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c05ae3137a1be989e90fa1d0c8c7c9ad08814f90a4fa87599ed0e3f0fd7f67b1"} Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.995468 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.996973 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.997008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.997024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.998474 4737 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1" exitCode=0 Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.998566 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1"} Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.998600 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"531ce10069e85d9beb4bf8592bead2a04e107eb2d9799a2dec86b36a4893483d"} Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.998699 4737 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.999708 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.999764 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:27 crc kubenswrapper[4737]: I0126 18:30:27.999783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.000185 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a"} Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.000211 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1433af3cfb2ab82ff9d9a364bee769f2c4e6ccdf6a791ee98316bd7f86870160"} Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.003526 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291" exitCode=0 Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.003574 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291"} Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.003610 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7aa89d76ed7a2c5c59d4eac21756b868d840ecca25236dcf9d9d2d84f5bd01eb"} Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.003713 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.004635 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.004666 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.004675 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.008334 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.009463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.009521 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.009620 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:28 crc kubenswrapper[4737]: W0126 18:30:28.164747 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:28 crc kubenswrapper[4737]: E0126 18:30:28.164880 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:28 crc kubenswrapper[4737]: W0126 18:30:28.260030 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:28 crc kubenswrapper[4737]: E0126 18:30:28.260151 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:30:28 crc kubenswrapper[4737]: E0126 18:30:28.316413 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s" Jan 26 18:30:28 crc kubenswrapper[4737]: W0126 18:30:28.387900 4737 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 26 18:30:28 crc kubenswrapper[4737]: E0126 18:30:28.387969 4737 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" 
logger="UnhandledError" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.548291 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.554335 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.554390 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.554410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.554483 4737 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:30:28 crc kubenswrapper[4737]: I0126 18:30:28.912734 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:34:19.545281688 +0000 UTC Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.004444 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.009375 4737 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09" exitCode=0 Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.009463 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.009616 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:29 crc kubenswrapper[4737]: 
I0126 18:30:29.010530 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.010564 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.010577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.011453 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.011656 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.012919 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.012950 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.012962 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014062 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014102 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014113 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014185 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014718 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014749 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.014759 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.015938 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.015975 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.015990 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.015977 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.016922 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.016972 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.016986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018771 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018793 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018803 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018814 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675"} Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.018903 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.019508 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.019568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.019585 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.381516 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.913507 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:30:44.072250247 +0000 UTC Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.939691 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:29 crc kubenswrapper[4737]: I0126 18:30:29.998562 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 
18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.026134 4737 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6" exitCode=0 Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.026169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6"} Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.026392 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.026515 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.026555 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.028333 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.028384 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.028397 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.029271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.029331 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.029346 4737 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.029859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.029971 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.030034 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:30 crc kubenswrapper[4737]: I0126 18:30:30.914581 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:56:12.212730798 +0000 UTC Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.034172 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9"} Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.034218 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d"} Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.034229 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe"} Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.034238 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971"} Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.034249 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.035331 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.035374 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.035393 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.447262 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.447501 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.449396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.449453 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.449484 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.776531 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.776839 4737 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.778603 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.778654 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.778668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:31 crc kubenswrapper[4737]: I0126 18:30:31.915147 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:01:06.473673393 +0000 UTC Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.045842 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83"} Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.046019 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.047520 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.047750 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.047982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.051953 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.052184 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.053741 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.053817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.053834 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:32 crc kubenswrapper[4737]: I0126 18:30:32.916332 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:06:02.403358521 +0000 UTC Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.049150 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.050575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.050643 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.050663 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.511525 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.512345 4737 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.515158 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.515221 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.515246 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.523421 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:33 crc kubenswrapper[4737]: I0126 18:30:33.917169 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:24:02.399202641 +0000 UTC Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.053992 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.054173 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.056118 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.056231 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.056359 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:34 crc kubenswrapper[4737]: I0126 18:30:34.918380 4737 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:59:28.100094109 +0000 UTC Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.052060 4737 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.052194 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.056587 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.057607 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.057676 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.057693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.432249 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.432509 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:35 crc 
kubenswrapper[4737]: I0126 18:30:35.434467 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.434560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.434587 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.458572 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 18:30:35 crc kubenswrapper[4737]: I0126 18:30:35.919428 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:27:19.111932396 +0000 UTC Jan 26 18:30:36 crc kubenswrapper[4737]: I0126 18:30:36.059164 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:36 crc kubenswrapper[4737]: I0126 18:30:36.060518 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:36 crc kubenswrapper[4737]: I0126 18:30:36.060589 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:36 crc kubenswrapper[4737]: I0126 18:30:36.060610 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:36 crc kubenswrapper[4737]: I0126 18:30:36.919677 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:55:22.658477872 +0000 UTC Jan 26 18:30:37 crc kubenswrapper[4737]: E0126 18:30:37.051841 4737 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node 
info: node \"crc\" not found" Jan 26 18:30:37 crc kubenswrapper[4737]: I0126 18:30:37.920179 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:00:18.414403711 +0000 UTC Jan 26 18:30:38 crc kubenswrapper[4737]: E0126 18:30:38.556167 4737 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.910505 4737 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.920840 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:39:24.027307159 +0000 UTC Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.973212 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.973359 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.974819 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.974852 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:38 crc kubenswrapper[4737]: I0126 18:30:38.974863 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:39 crc kubenswrapper[4737]: E0126 18:30:39.006716 4737 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 18:30:39 crc kubenswrapper[4737]: I0126 18:30:39.529715 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 18:30:39 crc kubenswrapper[4737]: I0126 18:30:39.529813 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 18:30:39 crc kubenswrapper[4737]: I0126 18:30:39.536697 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 18:30:39 crc kubenswrapper[4737]: I0126 18:30:39.536759 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 18:30:39 crc kubenswrapper[4737]: I0126 18:30:39.921713 4737 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:23:07.74210548 +0000 UTC Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.006984 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]log ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]etcd ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-apiextensions-informers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/crd-informer-synced ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 18:30:40 crc kubenswrapper[4737]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/bootstrap-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-registration-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]autoregister-completion ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 18:30:40 crc kubenswrapper[4737]: livez check failed Jan 26 
18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.007114 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.156646 4737 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.158984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.159033 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.159048 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.159109 4737 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:30:40 crc kubenswrapper[4737]: I0126 18:30:40.922656 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:38:01.097484018 +0000 UTC Jan 26 18:30:41 crc kubenswrapper[4737]: I0126 18:30:41.922774 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:55:45.187600653 +0000 UTC Jan 26 18:30:42 crc kubenswrapper[4737]: I0126 18:30:42.923024 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:26:26.265928357 +0000 UTC Jan 26 18:30:43 crc kubenswrapper[4737]: I0126 18:30:43.277146 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Rotating certificates Jan 26 18:30:43 crc kubenswrapper[4737]: I0126 18:30:43.294348 4737 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 18:30:43 crc kubenswrapper[4737]: I0126 18:30:43.314418 4737 csr.go:261] certificate signing request csr-fhwsd is approved, waiting to be issued Jan 26 18:30:43 crc kubenswrapper[4737]: I0126 18:30:43.322834 4737 csr.go:257] certificate signing request csr-fhwsd is issued Jan 26 18:30:43 crc kubenswrapper[4737]: I0126 18:30:43.923778 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:20:55.123408407 +0000 UTC Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.324996 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 18:25:43 +0000 UTC, rotation deadline is 2026-12-10 09:11:43.723854677 +0000 UTC Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.325102 4737 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7622h40m59.398758038s for next certificate rotation Jan 26 18:30:44 crc kubenswrapper[4737]: E0126 18:30:44.538345 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.540131 4737 trace.go:236] Trace[928341918]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:30:30.827) (total time: 13712ms): Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[928341918]: ---"Objects listed" error: 13712ms (18:30:44.539) Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[928341918]: [13.712963721s] [13.712963721s] END Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.540169 4737 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.540941 4737 trace.go:236] Trace[444082987]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:30:30.587) (total time: 13953ms): Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[444082987]: ---"Objects listed" error: 13953ms (18:30:44.540) Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[444082987]: [13.95306349s] [13.95306349s] END Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.540990 4737 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.541666 4737 trace.go:236] Trace[35461022]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:30:29.769) (total time: 14771ms): Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[35461022]: ---"Objects listed" error: 14771ms (18:30:44.541) Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[35461022]: [14.771994038s] [14.771994038s] END Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.541701 4737 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.542046 4737 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.544974 4737 trace.go:236] Trace[158179387]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:30:30.610) (total time: 13933ms): Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[158179387]: ---"Objects listed" error: 13933ms (18:30:44.544) Jan 26 18:30:44 crc kubenswrapper[4737]: Trace[158179387]: [13.933920259s] [13.933920259s] END Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.545032 4737 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.710489 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.714236 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.912004 4737 apiserver.go:52] "Watching apiserver" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.917937 4737 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.918791 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fsmsj","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.919825 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.919962 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:44 crc kubenswrapper[4737]: E0126 18:30:44.920100 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.920205 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:44 crc kubenswrapper[4737]: E0126 18:30:44.920250 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.920814 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.920907 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.920945 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.921279 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:44 crc kubenswrapper[4737]: E0126 18:30:44.921930 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.923877 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:06:43.978287862 +0000 UTC Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.923947 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.924289 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.924371 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.924477 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.924690 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.924744 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.925050 4737 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.925196 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.925399 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.925620 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.925786 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.926018 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.942853 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.955693 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.971545 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.983397 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:44 crc kubenswrapper[4737]: I0126 18:30:44.994360 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.004642 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.006259 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.011215 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.011814 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53812->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.011883 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53812->192.168.126.11:17697: read: connection reset by peer" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.012975 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.013084 4737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.013055 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 
18:30:45.034031 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045712 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045767 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045792 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045814 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045842 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045870 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045895 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045915 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 
18:30:45.045939 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.045960 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046089 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046116 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046137 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046160 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046209 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046228 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046250 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046270 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.046288 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046747 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046831 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.046964 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047303 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047497 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047632 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047666 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047686 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047708 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047727 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047746 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047762 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047778 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047838 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047857 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047875 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " 
Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047892 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047909 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047926 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047944 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047960 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047979 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047998 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048028 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048045 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048061 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048103 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048126 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048144 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048161 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048182 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048200 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048217 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048236 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048255 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048272 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048295 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048315 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048338 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048388 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048407 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048448 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048534 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048554 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048575 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048615 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048637 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048656 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 
18:30:45.048678 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048723 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048746 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048764 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048783 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048804 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048821 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048838 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048861 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048879 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048896 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 
18:30:45.048916 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048933 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048951 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048971 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048997 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049029 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049047 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049080 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082378 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047938 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.047970 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048160 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048548 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048639 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.048950 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.049114 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:30:45.54909174 +0000 UTC m=+18.857286438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086104 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086142 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086163 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086186 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086207 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086226 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086234 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086249 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086268 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086284 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086300 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086319 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086339 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086374 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086392 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086408 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086424 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 
18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086442 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086492 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086511 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086530 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086548 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086463 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
(OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086573 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086664 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086690 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086718 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086740 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086786 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049331 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049464 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049574 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.050052 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.054279 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.054514 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.054653 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082185 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082212 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082286 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082560 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082579 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082772 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.082807 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.083751 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.083825 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.083919 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.083912 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084019 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084364 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084575 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084703 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084968 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.084998 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085011 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085189 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085291 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085417 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.087013 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085456 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085591 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085665 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085693 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.085986 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.087303 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.087515 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.091708 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.091734 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.091894 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.091929 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092349 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.049197 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092537 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092790 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092825 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092860 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.086747 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092946 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092983 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093016 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093043 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093061 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093126 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093149 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093168 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093184 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093203 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.093222 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093241 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093258 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093275 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093292 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093308 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093325 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093343 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093365 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093383 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093401 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 
crc kubenswrapper[4737]: I0126 18:30:45.093420 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093437 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093454 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093474 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093491 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093507 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093525 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093542 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093558 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093582 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093604 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093624 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093645 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093667 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093687 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093704 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093724 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093745 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093764 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093780 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093798 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093818 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093837 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093853 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093871 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093888 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093907 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093928 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093948 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093969 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094031 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094094 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094129 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094161 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094185 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094210 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094241 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094272 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094294 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094321 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094346 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094368 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094387 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094411 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 
18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094431 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094450 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094472 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094491 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094509 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094528 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094548 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094569 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094589 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094611 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094634 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.094661 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094700 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094720 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094741 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094760 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094789 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094808 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094832 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094850 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.092980 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093559 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093619 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.093877 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094189 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094460 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094619 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094629 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.094844 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.095173 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.095353 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.095497 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.096482 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.096742 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.096781 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.096908 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.097283 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.098769 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.099199 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.099225 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.099288 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.099486 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.099645 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.100088 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.100424 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.100507 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.100580 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101028 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101125 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101340 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101395 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101601 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.101767 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102009 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102045 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102167 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102302 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102413 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102651 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.102906 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.103131 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.103317 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.106581 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.106797 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.106980 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.107157 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.107375 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.109336 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.109431 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79f4091b-95d7-420a-b90a-1b6f48fb634e-hosts-file\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.109538 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.109585 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.109638 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.110211 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.110342 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.110495 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.110545 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.110624 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:45.610600897 +0000 UTC m=+18.918795605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.110952 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.111166 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.111369 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.111740 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112057 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112162 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112445 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112671 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112702 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112888 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.112903 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.113187 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.113294 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.113786 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114123 4737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114271 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114370 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114470 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114553 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114605 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114651 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114700 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114718 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114884 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.114985 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115058 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115360 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115409 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115485 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115522 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115549 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlt5\" (UniqueName: \"kubernetes.io/projected/79f4091b-95d7-420a-b90a-1b6f48fb634e-kube-api-access-qtlt5\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115613 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115643 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.115704 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.117836 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" 
Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.118189 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.118487 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.119510 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.119913 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.120048 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.120564 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.122146 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.122280 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.122448 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:45.62242031 +0000 UTC m=+18.930615208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.122942 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.123600 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.123624 4737 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.123638 4737 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.123651 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.124476 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.124582 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.125537 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126394 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126525 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126637 4737 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126663 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126699 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126709 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126719 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126729 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126742 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126763 4737 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126776 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126793 4737 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126802 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126810 4737 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126820 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node 
\"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126829 4737 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126838 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126775 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126879 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126906 4737 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126925 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126941 4737 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126957 4737 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126971 4737 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126984 4737 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.126999 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127012 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127027 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127042 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127057 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127094 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127192 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.127248 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.128632 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.129962 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.130207 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.130468 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131031 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131052 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131093 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131175 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:45.631152244 +0000 UTC m=+18.939347152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131395 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131430 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131459 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.131553 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:45.631516694 +0000 UTC m=+18.939711592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.131710 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.131824 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.132042 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.132318 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.132835 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.132865 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.133572 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.136590 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.137383 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.138240 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.139307 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.139743 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.140364 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.140484 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.140839 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.141751 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.142137 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.142880 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.143568 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.144750 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.145464 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"c
luster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.145956 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.146011 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.146413 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.146535 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.146638 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.147211 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.148052 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.148152 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.148371 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.148987 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.149031 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.149265 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.149324 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.149581 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.149911 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.150342 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.151062 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152044 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152243 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152327 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152418 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152448 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.152699 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.153018 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.153052 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.153103 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.153345 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.153760 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.160236 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.161711 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.168753 4737 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.168917 4737 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171366 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171399 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171415 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171462 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171556 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.171477 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.179977 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.184871 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.185452 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.196538 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204153 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cvbml"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204702 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204930 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204990 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204828 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qxkj5"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.204964 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.205767 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qjff2"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.205994 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.206304 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.212841 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213045 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213206 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213354 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213579 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213690 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.213983 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.214230 4737 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.214353 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.214598 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.214762 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.215326 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.219633 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.225265 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.226507 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227546 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-system-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227583 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-k8s-cni-cncf-io\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227613 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227633 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-cnibin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227651 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-kubelet\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227670 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-multus-daemon-config\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227687 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9ggl\" (UniqueName: \"kubernetes.io/projected/82627aad-2019-482e-934a-7e9729927a34-kube-api-access-q9ggl\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227703 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-multus-certs\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227719 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227999 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.227858 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228451 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-bin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228554 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/afd75772-7900-46c3-b392-afb075e1cc08-rootfs\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228609 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842" exitCode=255 Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228622 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afd75772-7900-46c3-b392-afb075e1cc08-mcd-auth-proxy-config\") pod \"machine-config-daemon-qxkj5\" (UID: 
\"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228644 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afd75772-7900-46c3-b392-afb075e1cc08-proxy-tls\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228662 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-netns\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228739 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-conf-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228762 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9v4q\" (UniqueName: \"kubernetes.io/projected/afd75772-7900-46c3-b392-afb075e1cc08-kube-api-access-l9v4q\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228788 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228813 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-multus\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228839 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlt5\" (UniqueName: \"kubernetes.io/projected/79f4091b-95d7-420a-b90a-1b6f48fb634e-kube-api-access-qtlt5\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228857 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cnibin\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228878 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-os-release\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228922 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-socket-dir-parent\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228951 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-os-release\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.228987 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79f4091b-95d7-420a-b90a-1b6f48fb634e-hosts-file\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229015 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-etc-kubernetes\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229034 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-cni-binary-copy\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229051 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-hostroot\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229094 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229112 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-system-cni-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229211 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229233 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jhv\" (UniqueName: \"kubernetes.io/projected/f32d3b75-6d15-4fb7-9559-d3df1d77071e-kube-api-access-s4jhv\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229306 4737 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229324 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229337 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229347 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229358 4737 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229367 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229409 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229420 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229485 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229501 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229573 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.229789 4737 scope.go:117] "RemoveContainer" containerID="d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230185 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230232 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/79f4091b-95d7-420a-b90a-1b6f48fb634e-hosts-file\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230504 4737 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230526 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230538 4737 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230548 4737 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230558 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230568 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230580 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230590 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230602 4737 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230613 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230624 4737 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230635 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230644 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230653 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.230662 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230671 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230680 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230689 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230697 4737 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230706 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230715 4737 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230726 4737 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230736 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230745 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230754 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230764 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230775 4737 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230785 4737 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230794 4737 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230802 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230811 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230820 4737 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230829 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230840 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.230850 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.235852 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238177 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238224 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238240 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238289 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238727 4737 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238763 4737 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238787 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238808 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238821 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238837 4737 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238860 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238875 4737 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238887 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238899 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238918 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.238932 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239224 4737 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239276 4737 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239309 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239334 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239349 4737 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239364 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239387 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239404 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239419 4737 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239433 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 
18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239448 4737 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239470 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239483 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239500 4737 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239521 4737 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239544 4737 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239564 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239579 4737 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239598 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239611 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239624 4737 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239638 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239655 4737 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239669 4737 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239682 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239699 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239711 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239727 4737 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239739 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239755 4737 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239767 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239782 4737 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath 
\"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239796 4737 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239815 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239831 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239843 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239856 4737 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239872 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239884 4737 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239896 4737 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239914 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239927 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239941 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239953 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239967 4737 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239980 4737 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.239994 4737 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240007 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240024 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240036 4737 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240048 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240064 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240100 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240112 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240123 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240138 4737 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240151 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240162 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240173 4737 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240188 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240201 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240212 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240225 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240240 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240254 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240267 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240281 4737 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240294 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" 
Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240305 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240316 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240330 4737 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240341 4737 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240350 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240362 4737 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240379 4737 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240391 4737 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240403 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240419 4737 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240436 4737 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240448 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240463 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240479 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240491 4737 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240503 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240516 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240532 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240544 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240555 4737 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240567 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240585 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240600 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240613 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240630 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240643 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240655 4737 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240669 4737 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240684 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node 
\"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240724 4737 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240737 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240751 4737 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240766 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240779 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240792 4737 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.240807 4737 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.242554 4737 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.244821 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.252791 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.255590 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.258004 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlt5\" (UniqueName: \"kubernetes.io/projected/79f4091b-95d7-420a-b90a-1b6f48fb634e-kube-api-access-qtlt5\") pod \"node-resolver-fsmsj\" (UID: \"79f4091b-95d7-420a-b90a-1b6f48fb634e\") " pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.262048 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fsmsj" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.264013 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.264044 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.264053 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.264085 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.264095 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.267929 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.273269 4737 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.276930 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.281357 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.281392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.281401 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.281416 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.281426 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.283374 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f301fe76b5deb158ce15195e10d449d15d1633511d59a2626a25216a751228d9 WatchSource:0}: Error finding container f301fe76b5deb158ce15195e10d449d15d1633511d59a2626a25216a751228d9: Status 404 returned error can't find the container with id f301fe76b5deb158ce15195e10d449d15d1633511d59a2626a25216a751228d9 Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.288605 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.296474 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.296591 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.304983 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.305024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.305035 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.305052 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.305064 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.307552 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.309222 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f4091b_95d7_420a_b90a_1b6f48fb634e.slice/crio-85e5b2a4ba2d3ed2bd38d58949be1f0b9947737c5ad09a31c92dfdbf3a350437 WatchSource:0}: Error finding container 85e5b2a4ba2d3ed2bd38d58949be1f0b9947737c5ad09a31c92dfdbf3a350437: Status 404 returned error can't find the container with id 85e5b2a4ba2d3ed2bd38d58949be1f0b9947737c5ad09a31c92dfdbf3a350437 Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.321390 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.332411 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.341597 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-etc-kubernetes\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.341763 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-cni-binary-copy\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.341859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-hostroot\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.342004 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342115 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jhv\" (UniqueName: \"kubernetes.io/projected/f32d3b75-6d15-4fb7-9559-d3df1d77071e-kube-api-access-s4jhv\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342223 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-system-cni-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342313 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342402 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-k8s-cni-cncf-io\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " 
pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342494 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-system-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342578 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-multus-daemon-config\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342680 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9ggl\" (UniqueName: \"kubernetes.io/projected/82627aad-2019-482e-934a-7e9729927a34-kube-api-access-q9ggl\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342775 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-cnibin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342863 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-kubelet\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.342951 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-multus-certs\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343062 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344262 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344364 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-bin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344477 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/afd75772-7900-46c3-b392-afb075e1cc08-rootfs\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343687 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344615 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-multus-daemon-config\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343759 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344615 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-bin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343857 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-system-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343983 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-etc-kubernetes\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344749 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-cni-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344213 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-system-cni-dir\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344783 
4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/afd75772-7900-46c3-b392-afb075e1cc08-rootfs\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343768 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-kubelet\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343787 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-k8s-cni-cncf-io\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343961 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-hostroot\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343893 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-multus-certs\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.343813 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-cnibin\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344176 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344874 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.344572 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afd75772-7900-46c3-b392-afb075e1cc08-mcd-auth-proxy-config\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.345542 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afd75772-7900-46c3-b392-afb075e1cc08-proxy-tls\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.345714 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-netns\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.345814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-conf-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.345904 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9v4q\" (UniqueName: \"kubernetes.io/projected/afd75772-7900-46c3-b392-afb075e1cc08-kube-api-access-l9v4q\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.345957 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-run-netns\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.346024 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-conf-dir\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.346019 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-multus\") pod \"multus-qjff2\" (UID: 
\"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.346191 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-host-var-lib-cni-multus\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.346822 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82627aad-2019-482e-934a-7e9729927a34-cni-binary-copy\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.346830 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cnibin\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347168 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-cnibin\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347284 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-socket-dir-parent\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc 
kubenswrapper[4737]: I0126 18:30:45.347393 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-os-release\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347508 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-os-release\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347552 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f32d3b75-6d15-4fb7-9559-d3df1d77071e-os-release\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347597 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-multus-socket-dir-parent\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.347634 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82627aad-2019-482e-934a-7e9729927a34-os-release\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.349554 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/afd75772-7900-46c3-b392-afb075e1cc08-proxy-tls\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.351044 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afd75772-7900-46c3-b392-afb075e1cc08-mcd-auth-proxy-config\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.360745 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.366522 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9v4q\" (UniqueName: \"kubernetes.io/projected/afd75772-7900-46c3-b392-afb075e1cc08-kube-api-access-l9v4q\") pod \"machine-config-daemon-qxkj5\" (UID: \"afd75772-7900-46c3-b392-afb075e1cc08\") " pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.368087 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s4jhv\" (UniqueName: \"kubernetes.io/projected/f32d3b75-6d15-4fb7-9559-d3df1d77071e-kube-api-access-s4jhv\") pod \"multus-additional-cni-plugins-cvbml\" (UID: \"f32d3b75-6d15-4fb7-9559-d3df1d77071e\") " pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.369563 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9ggl\" (UniqueName: \"kubernetes.io/projected/82627aad-2019-482e-934a-7e9729927a34-kube-api-access-q9ggl\") pod \"multus-qjff2\" (UID: \"82627aad-2019-482e-934a-7e9729927a34\") " pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.384818 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.402112 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.410544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.410612 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.410626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.410651 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.410665 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.414348 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.423337 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.435130 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.445168 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.459081 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.463130 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.471576 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.475236 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.503715 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.515563 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.515611 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.515625 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.515645 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.515658 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.527511 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.528599 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvbml" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.540123 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-qjff2" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.542004 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.546710 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.550375 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.550692 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:30:46.550666295 +0000 UTC m=+19.858861003 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.577678 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.579803 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.580267 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jgjrk"] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.581423 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591138 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.591508 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591267 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591330 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591410 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.591984 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591432 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.592033 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.592033 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 
18:30:45.592063 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: W0126 18:30:45.591455 4737 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.592144 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.592172 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.592269 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.598308 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.622468 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.624388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.624447 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.624470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.624498 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.624512 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.642934 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653282 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653328 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653355 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") pod \"ovnkube-node-jgjrk\" (UID: 
\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653382 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653407 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653447 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653473 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653497 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cnp4x\" (UniqueName: \"kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653526 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653611 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653644 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653710 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.653724 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.653797 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:46.65377687 +0000 UTC m=+19.961971578 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653734 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653889 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653919 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653936 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653956 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.653971 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.653971 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.654000 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.654023 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.654039 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654051 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.654057 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654090 4737 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.654101 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654154 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:46.654133839 +0000 UTC m=+19.962328747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654205 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654234 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 18:30:46.654227251 +0000 UTC m=+19.962422229 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654330 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654357 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654370 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: E0126 18:30:45.654425 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:46.654400766 +0000 UTC m=+19.962595474 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.661012 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.670091 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.682887 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.702517 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.717711 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.728696 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.728729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.728739 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.728769 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.728779 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.732487 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.744692 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754661 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754706 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754722 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754742 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754758 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754774 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754797 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754831 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754858 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754874 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnp4x\" (UniqueName: \"kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754889 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754908 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754952 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754972 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.754991 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755019 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755044 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755081 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755099 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755176 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755220 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755244 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch\") pod \"ovnkube-node-jgjrk\" 
(UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755269 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755301 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755328 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755490 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755518 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 
18:30:45.755545 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755573 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755601 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755608 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755649 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755688 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.755872 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.820913 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.831693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.831743 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.831753 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.831780 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.831793 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.845185 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.858033 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.867959 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.910895 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.924293 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:08:38.793290493 +0000 UTC Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.935162 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.935264 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.935282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.935307 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.935322 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:45Z","lastTransitionTime":"2026-01-26T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.946216 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:30:45 crc kubenswrapper[4737]: I0126 18:30:45.995537 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.025734 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.038395 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.038432 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.038442 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.038468 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.038480 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.064937 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.102329 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.146930 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.146974 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.146986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.147183 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.147199 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.148877 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.185988 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.225006 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.243119 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerStarted","Data":"938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.243164 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerStarted","Data":"3e257c1f9a2022ec3a34f39bb69e90246d4312db6ad3b3ba7d85c0daf02c75df"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.245190 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.245280 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.245296 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"298d91899c04a7610998ca947f802083a497e78f3b4724807ad100abc11c04f0"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.247302 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.248502 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.248538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.248552 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.248566 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.248579 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.249898 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.250252 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.251619 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8" exitCode=0 Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.251736 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.251836 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerStarted","Data":"8e85469ad3e958f0ec5c74a8fd1a610fc34eaeab0e11f112d0be9631981a8f72"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.253134 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fsmsj" event={"ID":"79f4091b-95d7-420a-b90a-1b6f48fb634e","Type":"ContainerStarted","Data":"182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.253167 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fsmsj" 
event={"ID":"79f4091b-95d7-420a-b90a-1b6f48fb634e","Type":"ContainerStarted","Data":"85e5b2a4ba2d3ed2bd38d58949be1f0b9947737c5ad09a31c92dfdbf3a350437"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.255111 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f301fe76b5deb158ce15195e10d449d15d1633511d59a2626a25216a751228d9"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.257640 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.257702 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.257717 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"589828a3bf3b2dbf15cd9e6d9475fb905f0422a47abe6067e38e7e45ae8b9b08"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.259634 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.259674 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"caa977f712e7e7e806503c7ffc25d4dffc834a6a367af134d0304f8bcd56378a"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.277515 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819
f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.285870 4737 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.325190 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.354293 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.354339 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.354352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.354375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.354391 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.375987 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.411102 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.445024 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.457916 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.457966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.457979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.457997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.458011 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.489005 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.523813 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.561631 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.561687 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.561699 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.561724 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.561737 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.565282 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.574203 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.574455 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:30:48.574413284 +0000 UTC m=+21.882607992 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.594209 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.633260 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.633622 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.665414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.665507 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.665522 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.665547 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.665559 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.678656 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.678729 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.678755 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.678782 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.679004 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 18:30:46 crc 
kubenswrapper[4737]: E0126 18:30:46.679601 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679646 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679666 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679801 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:48.679775197 +0000 UTC m=+21.987969925 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679871 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679909 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.679933 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.680003 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:48.679986743 +0000 UTC m=+21.988181451 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.680303 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.680338 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.680591 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:48.680364892 +0000 UTC m=+21.988559600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.680689 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:48.68065803 +0000 UTC m=+21.988852748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.690835 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnp4x\" (UniqueName: \"kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.705738 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.743191 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.755925 4737 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.755943 4737 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.755989 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides podName:ecb40773-20dc-48ef-bf7f-17f4a042b01c nodeName:}" failed. No retries permitted until 2026-01-26 18:30:47.255973812 +0000 UTC m=+20.564168520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides") pod "ovnkube-node-jgjrk" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.756003 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert podName:ecb40773-20dc-48ef-bf7f-17f4a042b01c nodeName:}" failed. No retries permitted until 2026-01-26 18:30:47.255996962 +0000 UTC m=+20.564191670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert") pod "ovnkube-node-jgjrk" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c") : failed to sync secret cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.756011 4737 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.756112 4737 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.756168 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config podName:ecb40773-20dc-48ef-bf7f-17f4a042b01c nodeName:}" failed. No retries permitted until 2026-01-26 18:30:47.256138056 +0000 UTC m=+20.564332764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config") pod "ovnkube-node-jgjrk" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.756249 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib podName:ecb40773-20dc-48ef-bf7f-17f4a042b01c nodeName:}" failed. No retries permitted until 2026-01-26 18:30:47.256220218 +0000 UTC m=+20.564414936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib") pod "ovnkube-node-jgjrk" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.768459 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.768708 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.768781 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.768918 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.768982 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.784721 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.795014 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.836124 4737 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.836556 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.236:58388->38.102.83.236:6443: use of closed network connection" Jan 26 18:30:46 crc kubenswrapper[4737]: W0126 18:30:46.836659 4737 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 26 18:30:46 crc kubenswrapper[4737]: W0126 18:30:46.836708 4737 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 26 18:30:46 crc kubenswrapper[4737]: W0126 18:30:46.837156 4737 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second 
and no items received Jan 26 18:30:46 crc kubenswrapper[4737]: W0126 18:30:46.837573 4737 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.871138 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.871575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.871586 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.871607 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.871619 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.873657 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.925044 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:50:11.326312929 +0000 UTC Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.927567 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.964881 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.977394 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.977438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.977452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.977468 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.977485 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:46Z","lastTransitionTime":"2026-01-26T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.983185 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.983318 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.983662 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.983802 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.983880 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:46 crc kubenswrapper[4737]: E0126 18:30:46.983969 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.987540 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.988351 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.989887 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.990575 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.991574 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.992118 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.992753 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.994116 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.994846 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.995979 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.996577 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.997793 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.998342 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 18:30:46 crc kubenswrapper[4737]: I0126 18:30:46.998838 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.000006 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.000538 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.001604 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.002115 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.002784 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.003949 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.004466 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.005547 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.005895 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de
313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.005998 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.007812 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.008368 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.008994 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.010853 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.011401 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.012517 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.013014 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.013918 4737 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.014027 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.015629 4737 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.016629 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.017210 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.018640 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.019290 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.020186 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.020821 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.022129 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.022660 4737 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.023638 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.024652 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.025441 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.026348 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.026875 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.027824 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.028543 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.029501 4737 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.029615 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.030153 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.031085 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.031644 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.032213 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.033044 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.064936 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.079664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.079697 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.079707 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.079721 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.079756 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: E0126 18:30:47.088376 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf32d3b75_6d15_4fb7_9559_d3df1d77071e.slice/crio-26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.110576 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.114243 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.164623 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.175724 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.183114 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.183289 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.183358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.183445 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.183504 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.228007 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.263081 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.264666 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f" exitCode=0 Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.264723 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.286307 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.286361 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.286391 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.286418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287175 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287359 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287416 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287466 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287489 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287506 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.287696 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.293093 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") pod \"ovnkube-node-jgjrk\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.303793 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.342539 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.391695 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.391750 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.391764 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.391789 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.391802 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.392145 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.411891 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:47 crc kubenswrapper[4737]: W0126 18:30:47.437354 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb40773_20dc_48ef_bf7f_17f4a042b01c.slice/crio-56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6 WatchSource:0}: Error finding container 56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6: Status 404 returned error can't find the container with id 56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6 Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.437508 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.462578 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.495466 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.495524 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.495536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.495555 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.495567 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.504899 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652
d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.552134 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.583942 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.598678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.598733 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.598749 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc 
kubenswrapper[4737]: I0126 18:30:47.598771 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.598787 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.623558 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.662916 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.701355 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.701407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.701420 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.701437 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.701450 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.703225 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.743065 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.783832 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.804221 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.804272 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.804284 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.804308 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.804323 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.826335 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.872627 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.906396 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.907721 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.907774 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.907787 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.907808 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.907821 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:47Z","lastTransitionTime":"2026-01-26T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.925294 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:44:23.109990455 +0000 UTC Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.945047 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"
name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:47 crc kubenswrapper[4737]: I0126 18:30:47.984134 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.010980 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.011026 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.011037 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.011058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.011105 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.023086 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.065321 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.107444 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.113633 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.113695 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.113716 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.113738 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.113750 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.144191 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.186419 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.216459 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.216517 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.216529 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.216548 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.216559 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.229971 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.273754 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.277030 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a" exitCode=0 Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.277129 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.282182 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.283564 4737 generic.go:334] "Generic (PLEG): 
container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9" exitCode=0 Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.283629 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.283668 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.305204 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.320192 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.320225 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.320235 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.320255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.320266 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.344854 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.386757 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.423735 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.426227 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.426278 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.426292 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 
18:30:48.426311 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.426322 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.464631 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.510874 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.529008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.529046 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.529056 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.529103 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.529118 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.545882 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z 
is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.574881 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.601978 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.602307 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:30:52.602272199 +0000 UTC m=+25.910466927 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.603304 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.631337 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.631380 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.631391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.631410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.631421 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.645108 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.689905 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.702877 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.702925 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.702969 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.702997 
4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703128 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703144 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703164 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703176 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703192 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:52.703177608 +0000 UTC m=+26.011372316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703202 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703253 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703271 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703257 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703212 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:52.703200349 +0000 UTC m=+26.011395057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703489 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:52.703455435 +0000 UTC m=+26.011650293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.703515 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:30:52.703501136 +0000 UTC m=+26.011696064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.723933 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.734179 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.734239 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.734256 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.734281 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.734305 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.765764 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.806694 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.836359 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.836407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.836419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.836440 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.836454 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.845218 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.883039 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.923718 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.925741 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:34:45.709844021 +0000 UTC Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.938918 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.938950 4737 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.938960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.938976 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.938986 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:48Z","lastTransitionTime":"2026-01-26T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.963452 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.981624 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.981653 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:48 crc kubenswrapper[4737]: I0126 18:30:48.981724 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.981801 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.982094 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:48 crc kubenswrapper[4737]: E0126 18:30:48.982199 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.004455 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.041613 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.041665 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.041677 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.041698 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.041711 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.046763 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.086169 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.123276 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.144463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.144514 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.144527 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.144552 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.144566 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.169063 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.204750 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd696
6302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.247872 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.247924 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.247935 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.248007 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.248025 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.291981 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.292062 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.292125 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.292151 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.292175 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.292199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f"} Jan 26 18:30:49 crc kubenswrapper[4737]: 
I0126 18:30:49.295030 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc" exitCode=0 Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.295204 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.323508 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.341262 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.351203 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.351246 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.351255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.351273 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.351285 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.357742 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.371843 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.403352 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.442686 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.453815 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc 
kubenswrapper[4737]: I0126 18:30:49.453847 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.453862 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.453882 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.453900 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.483237 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.524190 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 
18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.557565 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.557609 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.557621 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.557642 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.557656 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.563776 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.606738 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 
2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.643903 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.660400 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.660458 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.660470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.660487 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.660499 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.683574 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.734742 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.764340 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.764392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.764438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.764461 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.764473 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.779654 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.798959 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.867219 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.867266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.867277 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.867296 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.867307 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.880102 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.897056 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.926621 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:56:11.840252018 +0000 UTC Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.970619 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.970666 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.970680 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.970700 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:49 crc kubenswrapper[4737]: I0126 18:30:49.970714 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:49Z","lastTransitionTime":"2026-01-26T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.072731 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.072783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.072798 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.072821 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.072834 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.179830 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.179911 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.179925 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.179951 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.179973 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.290026 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.290097 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.290108 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.290126 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.290139 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.303629 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184" exitCode=0 Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.303734 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.321435 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaea
d203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\
\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.337200 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.350041 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.374900 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.389099 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.395900 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.395982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.395997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.396019 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.396033 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.409207 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.428225 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.443249 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.457552 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.474182 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.490852 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.499266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.499335 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.499350 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.499372 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.499385 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.511363 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.528216 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.542792 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:50Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.602492 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.602521 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.602530 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.602546 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.602554 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.705712 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.705793 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.705808 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.705829 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.705842 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.808367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.808402 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.808411 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.808425 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.808435 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.910383 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.910425 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.910437 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.910456 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.910469 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:50Z","lastTransitionTime":"2026-01-26T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.926939 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:12:10.006887922 +0000 UTC Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.981650 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.981681 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:50 crc kubenswrapper[4737]: E0126 18:30:50.981812 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:50 crc kubenswrapper[4737]: I0126 18:30:50.981832 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:50 crc kubenswrapper[4737]: E0126 18:30:50.981914 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:50 crc kubenswrapper[4737]: E0126 18:30:50.981976 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.012249 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.012282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.012292 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.012307 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.012316 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.114418 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.114459 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.114470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.114490 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.114500 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.217364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.217442 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.217463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.217493 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.217526 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.311192 4737 generic.go:334] "Generic (PLEG): container finished" podID="f32d3b75-6d15-4fb7-9559-d3df1d77071e" containerID="e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b" exitCode=0 Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.311257 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerDied","Data":"e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.325106 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.325169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.325183 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.325204 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.325217 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.334367 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.358256 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.375997 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.390907 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.404009 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.418426 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.430059 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.430134 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 
18:30:51.430146 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.430177 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.430194 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.433787 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346b
de5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.450019 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.469961 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.485526 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.500457 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.516987 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.531847 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.533643 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.533708 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.533725 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.533748 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.533787 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.549841 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:51Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.637145 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.637193 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.637205 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.637223 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.637237 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.740419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.740503 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.740519 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.740549 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.740566 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.843706 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.843749 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.843758 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.843776 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.843787 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.927157 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:17:44.497385287 +0000 UTC Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.946861 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.946948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.946968 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.947004 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:51 crc kubenswrapper[4737]: I0126 18:30:51.947023 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:51Z","lastTransitionTime":"2026-01-26T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.049722 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.049773 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.049796 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.049819 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.049833 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.153282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.153327 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.153336 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.153350 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.153359 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.256035 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.256116 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.256129 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.256146 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.256159 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.319919 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.325326 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" event={"ID":"f32d3b75-6d15-4fb7-9559-d3df1d77071e","Type":"ContainerStarted","Data":"3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.340780 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.359245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.359897 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.359913 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.359934 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.359952 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.361195 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.376723 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.393772 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.409004 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.431127 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.447211 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.462369 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.462441 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.462458 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.462482 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.462498 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.465118 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.480398 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.500466 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.518227 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.540409 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.556651 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.566008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.566114 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.566130 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc 
kubenswrapper[4737]: I0126 18:30:52.566156 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.566171 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.575371 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:52Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.645096 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.645323 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.645294984 +0000 UTC m=+33.953489692 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.668642 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.668679 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.668688 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.668703 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.668712 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.746912 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.746967 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.747001 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.747029 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747166 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747181 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747236 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747262 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.747237079 +0000 UTC m=+34.055431787 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747276 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747287 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.7472782 +0000 UTC m=+34.055472908 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747298 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747298 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747342 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747370 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747373 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.747350092 +0000 UTC m=+34.055544960 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.747504 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.747450395 +0000 UTC m=+34.055645243 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.772163 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.772206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.772217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.772233 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.772245 4737 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.875143 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.875197 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.875211 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.875237 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.875255 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.928032 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:12:57.684033149 +0000 UTC Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.978938 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.978991 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.979003 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.979020 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.979034 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:52Z","lastTransitionTime":"2026-01-26T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.981260 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.981332 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:52 crc kubenswrapper[4737]: I0126 18:30:52.981357 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.981387 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.981472 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:52 crc kubenswrapper[4737]: E0126 18:30:52.981618 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.081586 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.081642 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.081659 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.081682 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.081700 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.184188 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.184222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.184232 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.184248 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.184257 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.287832 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.287894 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.287907 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.287928 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.287941 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.391753 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.391814 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.391828 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.391848 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.391860 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.494136 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.494188 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.494206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.494222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.494235 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.598202 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.598272 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.598289 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.598321 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.598337 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.702269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.702342 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.702367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.702395 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.702416 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.806283 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.806372 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.806389 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.806414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.806459 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.909486 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.909535 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.909549 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.909578 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.909592 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:53Z","lastTransitionTime":"2026-01-26T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:53 crc kubenswrapper[4737]: I0126 18:30:53.929227 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:40:07.488756638 +0000 UTC Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.012141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.012196 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.012213 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.012235 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.012251 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.115039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.115157 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.115182 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.115212 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.115240 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.218506 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.218547 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.218561 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.218581 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.218593 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.321811 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.321881 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.321893 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.321917 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.321931 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.340358 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.341053 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.341192 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.357477 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.410834 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.411596 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.425131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.425194 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.425204 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc 
kubenswrapper[4737]: I0126 18:30:54.425224 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.425240 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.426150 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.439710 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.460463 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.475732 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.495254 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.509779 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.522799 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.528357 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.528401 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.528413 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.528435 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.528446 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.540911 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.559808 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.570456 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.591239 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.605328 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.627179 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.631803 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.631875 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.631893 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.631920 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.631937 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.641960 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.654440 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.679620 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.702721 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.717255 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.729635 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.734415 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.734460 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.734474 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.734493 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.734505 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.743980 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.757394 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.769873 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.784899 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.805870 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.822989 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.836987 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.837040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.837053 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.837092 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.837111 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.843832 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:54Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.930119 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:42:31.270307637 +0000 UTC Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.940455 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.940536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.940561 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.940592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.940616 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:54Z","lastTransitionTime":"2026-01-26T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.981158 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.981158 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:54 crc kubenswrapper[4737]: E0126 18:30:54.981382 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:54 crc kubenswrapper[4737]: I0126 18:30:54.981167 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:54 crc kubenswrapper[4737]: E0126 18:30:54.981478 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:54 crc kubenswrapper[4737]: E0126 18:30:54.981530 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.044549 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.044947 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.045020 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.045131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.045220 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.148049 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.148495 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.148586 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.148664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.148722 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.251599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.251654 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.251668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.251688 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.251702 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.347200 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.354921 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.354975 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.354987 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.355012 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.355033 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.369699 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.394394 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f76
94819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.417739 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.433491 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.447984 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.458209 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.458253 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.458263 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.458283 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.458295 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.464848 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.480720 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.494457 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.505901 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.505945 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.505955 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.505980 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.505992 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.510199 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.521507 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.523743 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.526051 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.526142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.526158 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.526181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.526196 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.541590 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.541613 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.547138 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.547187 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.547202 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.547226 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.547240 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.560766 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.565050 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.571912 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.572226 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.572303 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.572388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.572452 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.577932 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.593664 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.603567 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.604381 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.604409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.604419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.604439 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.604450 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.619230 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: E0126 18:30:55.619350 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.621112 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.621133 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.621145 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.621169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.621179 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.622204 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.723844 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.723881 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.723889 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.723906 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.723915 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.831269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.831315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.831325 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.831340 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.831350 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.930627 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:44:58.672748222 +0000 UTC Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.933761 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.933796 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.933808 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.933826 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:55 crc kubenswrapper[4737]: I0126 18:30:55.933842 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:55Z","lastTransitionTime":"2026-01-26T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.036270 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.036315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.036325 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.036345 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.036358 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.139887 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.139937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.139956 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.139981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.139998 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.242781 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.242830 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.242840 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.242856 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.242866 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.346318 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.346367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.346412 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.346435 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.346447 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.449626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.449668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.449678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.449695 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.449705 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.552626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.552673 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.552683 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.552697 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.552707 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.655129 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.655181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.655195 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.655215 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.655228 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.757833 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.757888 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.757910 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.757937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.757956 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.860953 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.861004 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.861018 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.861039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.861054 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.931343 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:59:38.473682274 +0000 UTC Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.964039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.964114 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.964125 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.964147 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.964163 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:56Z","lastTransitionTime":"2026-01-26T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.981261 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.981278 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:56 crc kubenswrapper[4737]: I0126 18:30:56.981408 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:56 crc kubenswrapper[4737]: E0126 18:30:56.981533 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:56 crc kubenswrapper[4737]: E0126 18:30:56.981654 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:56 crc kubenswrapper[4737]: E0126 18:30:56.981773 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.002711 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.019087 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.035326 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.054334 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.067148 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.067233 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.067253 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.067280 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.067300 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.074781 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.096371 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.113495 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9
b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d78176531
4070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T
18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.145383 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.159363 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.169937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.169991 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.170003 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc 
kubenswrapper[4737]: I0126 18:30:57.170026 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.170040 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.172748 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.184420 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.197935 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.219372 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.233913 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.273581 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.273634 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.273646 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.273664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.273678 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.376170 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.376266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.376290 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.376325 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.376350 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.479407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.479491 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.479516 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.479548 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.479573 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.582761 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.582818 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.582881 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.582900 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.582913 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.685141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.685205 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.685221 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.685245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.685259 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.788388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.788430 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.788438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.788452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.788500 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.892298 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.892389 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.892409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.892438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.892538 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.932626 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:26:42.279180753 +0000 UTC Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.996492 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.997191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.997230 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.997270 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.997294 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:57Z","lastTransitionTime":"2026-01-26T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.997917 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj"] Jan 26 18:30:57 crc kubenswrapper[4737]: I0126 18:30:57.998346 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.000440 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.002680 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.014360 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.036836 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.054875 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.073283 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.091339 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.100304 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.100375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.100396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.100425 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.100445 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.109327 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.109653 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9bc7b559-f4f0-47b0-b148-6d0915785538-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.109703 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.109741 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.109800 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvgd\" (UniqueName: \"kubernetes.io/projected/9bc7b559-f4f0-47b0-b148-6d0915785538-kube-api-access-knvgd\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.132590 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9
b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d78176531
4070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T
18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.149204 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.184887 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5
bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.202837 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.203247 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.203271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.203289 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.203301 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.203696 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.210930 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.211210 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.211410 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvgd\" (UniqueName: 
\"kubernetes.io/projected/9bc7b559-f4f0-47b0-b148-6d0915785538-kube-api-access-knvgd\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.211663 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9bc7b559-f4f0-47b0-b148-6d0915785538-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.211898 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.212139 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9bc7b559-f4f0-47b0-b148-6d0915785538-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.219086 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9bc7b559-f4f0-47b0-b148-6d0915785538-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 
crc kubenswrapper[4737]: I0126 18:30:58.220857 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.234427 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.235596 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvgd\" (UniqueName: \"kubernetes.io/projected/9bc7b559-f4f0-47b0-b148-6d0915785538-kube-api-access-knvgd\") pod \"ovnkube-control-plane-749d76644c-rzpxj\" (UID: \"9bc7b559-f4f0-47b0-b148-6d0915785538\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.250731 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.268922 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.285800 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.305477 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.305522 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.305533 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.305552 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.305563 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.321968 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" Jan 26 18:30:58 crc kubenswrapper[4737]: W0126 18:30:58.344451 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc7b559_f4f0_47b0_b148_6d0915785538.slice/crio-d033bbae39f0d51893ad27dd04ef724dbaacee9c1c2f97f3d148612a8887c8ba WatchSource:0}: Error finding container d033bbae39f0d51893ad27dd04ef724dbaacee9c1c2f97f3d148612a8887c8ba: Status 404 returned error can't find the container with id d033bbae39f0d51893ad27dd04ef724dbaacee9c1c2f97f3d148612a8887c8ba Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.359902 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" event={"ID":"9bc7b559-f4f0-47b0-b148-6d0915785538","Type":"ContainerStarted","Data":"d033bbae39f0d51893ad27dd04ef724dbaacee9c1c2f97f3d148612a8887c8ba"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.361835 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/0.log" Jan 26 
18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.365401 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832" exitCode=1 Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.365439 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.366285 4737 scope.go:117] "RemoveContainer" containerID="b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.383532 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.398840 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.411678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.411737 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.411755 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.411778 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.411794 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.415517 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.447858 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] 
Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.468026 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.484464 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.500430 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.515005 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.515056 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.515087 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.515106 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.515120 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.518535 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.534498 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.548408 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.569101 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.584737 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.597157 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.609285 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.617642 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.617701 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.617715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.617738 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.617751 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.624512 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652
d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.719894 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc 
kubenswrapper[4737]: I0126 18:30:58.719940 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.719953 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.719971 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.719984 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.758564 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gxxjs"] Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.758935 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.761800 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.762111 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.762611 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.764045 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.786997 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.809331 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.822985 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.823023 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.823034 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.823054 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.823085 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.828104 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.842128 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.857415 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.877677 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] 
Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.896395 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.916241 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.918842 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/632d368f-0ceb-4edc-aac0-b760c24da635-host\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.918868 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/632d368f-0ceb-4edc-aac0-b760c24da635-serviceca\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.918888 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrskd\" (UniqueName: 
\"kubernetes.io/projected/632d368f-0ceb-4edc-aac0-b760c24da635-kube-api-access-mrskd\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.925426 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.925450 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.925458 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.925470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.925479 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:58Z","lastTransitionTime":"2026-01-26T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.933223 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:03:12.590006129 +0000 UTC Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.933705 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.950212 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.969719 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.981248 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.981292 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.981353 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:30:58 crc kubenswrapper[4737]: E0126 18:30:58.981388 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:30:58 crc kubenswrapper[4737]: E0126 18:30:58.981550 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:30:58 crc kubenswrapper[4737]: E0126 18:30:58.981658 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:30:58 crc kubenswrapper[4737]: I0126 18:30:58.990646 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:58Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.014114 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5
bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.020342 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/632d368f-0ceb-4edc-aac0-b760c24da635-host\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.020520 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/632d368f-0ceb-4edc-aac0-b760c24da635-serviceca\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.020596 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrskd\" (UniqueName: 
\"kubernetes.io/projected/632d368f-0ceb-4edc-aac0-b760c24da635-kube-api-access-mrskd\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.020451 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/632d368f-0ceb-4edc-aac0-b760c24da635-host\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.021612 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/632d368f-0ceb-4edc-aac0-b760c24da635-serviceca\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.029715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.029988 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.030062 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.030172 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.030236 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.038648 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrskd\" (UniqueName: \"kubernetes.io/projected/632d368f-0ceb-4edc-aac0-b760c24da635-kube-api-access-mrskd\") pod \"node-ca-gxxjs\" (UID: \"632d368f-0ceb-4edc-aac0-b760c24da635\") " pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.042023 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.059042 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.078134 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gxxjs" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.089346 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: W0126 18:30:59.097707 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod632d368f_0ceb_4edc_aac0_b760c24da635.slice/crio-8cdfbb888a6f926479e266b5fcd91f815fcddcc478727bfdead8ce0c77eab163 WatchSource:0}: Error finding container 8cdfbb888a6f926479e266b5fcd91f815fcddcc478727bfdead8ce0c77eab163: Status 404 returned error can't find the container with id 8cdfbb888a6f926479e266b5fcd91f815fcddcc478727bfdead8ce0c77eab163 Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.133154 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.133206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.133217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.133243 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 
18:30:59.133257 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.236523 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.237299 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.237353 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.237372 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.237383 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.340560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.340602 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.340611 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.340625 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.340635 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.373402 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gxxjs" event={"ID":"632d368f-0ceb-4edc-aac0-b760c24da635","Type":"ContainerStarted","Data":"045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.373464 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gxxjs" event={"ID":"632d368f-0ceb-4edc-aac0-b760c24da635","Type":"ContainerStarted","Data":"8cdfbb888a6f926479e266b5fcd91f815fcddcc478727bfdead8ce0c77eab163"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.377057 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" event={"ID":"9bc7b559-f4f0-47b0-b148-6d0915785538","Type":"ContainerStarted","Data":"4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.377141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" event={"ID":"9bc7b559-f4f0-47b0-b148-6d0915785538","Type":"ContainerStarted","Data":"10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.380601 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/0.log" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.384365 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.384935 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.390703 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.402749 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.422707 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.439112 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.443695 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.443867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.443949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.444035 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.444188 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.454273 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.467813 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.480380 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.495389 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.509052 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.516749 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4pv7r"] Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.517229 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: E0126 18:30:59.517303 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.525170 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.539385 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.547011 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.547045 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.547087 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.547116 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.547127 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.557971 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] 
Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.575631 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.593719 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.605921 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.619527 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.630875 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7cfj\" (UniqueName: \"kubernetes.io/projected/1a3aadb5-b908-4300-af5f-e3c37dff9e14-kube-api-access-v7cfj\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.630934 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.637547 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.650603 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.650674 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.650692 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.650722 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.650738 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.651866 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.665226 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc 
kubenswrapper[4737]: I0126 18:30:59.685007 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.698511 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.710887 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.722442 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.732096 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7cfj\" (UniqueName: \"kubernetes.io/projected/1a3aadb5-b908-4300-af5f-e3c37dff9e14-kube-api-access-v7cfj\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.732163 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: E0126 18:30:59.732273 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:30:59 crc kubenswrapper[4737]: E0126 18:30:59.732361 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:00.232344929 +0000 UTC m=+33.540539637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.735471 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.750041 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.752180 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7cfj\" (UniqueName: 
\"kubernetes.io/projected/1a3aadb5-b908-4300-af5f-e3c37dff9e14-kube-api-access-v7cfj\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.756657 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.756849 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.756927 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.757007 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.757272 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.767627 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.780390 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.793537 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.808034 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.822884 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.835694 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.848194 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.860828 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.860871 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.860886 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.860907 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.860919 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.868452 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.933961 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:20:00.198033198 +0000 UTC Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.944550 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.960711 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.963961 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.964008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.964021 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.964040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.964053 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:30:59Z","lastTransitionTime":"2026-01-26T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:30:59 crc kubenswrapper[4737]: I0126 18:30:59.990476 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:30:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.007762 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.022491 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.037931 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.055948 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.066346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.066419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.066436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.066454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.066465 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.074257 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.089208 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.104315 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc 
kubenswrapper[4737]: I0126 18:31:00.124982 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.142715 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.159672 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.169169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.169223 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.169241 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.169261 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.169276 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.171433 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.184576 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229d
aeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.197455 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.210904 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.224880 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.237480 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.237781 4737 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.238036 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:01.23801525 +0000 UTC m=+34.546209958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.271878 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.271925 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.271937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.271955 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.271969 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.375304 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.375354 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.375369 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.375391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.375405 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.393763 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/1.log" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.394901 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/0.log" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.398429 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5" exitCode=1 Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.398530 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.398743 4737 scope.go:117] "RemoveContainer" containerID="b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.399529 4737 scope.go:117] "RemoveContainer" containerID="3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.399811 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.415930 4737 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.427921 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.450449 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.468210 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.477933 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.477957 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.477965 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.477978 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.477988 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.482860 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.500027 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.515725 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.537619 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.553603 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.566694 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.580920 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.580962 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.580973 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.580987 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.580997 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.582156 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.636764 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d3752e2178a20cb4f04bcb4301397a5888811fbaaf3d02403559e4cf938832\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:30:56.611723 6016 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:30:56.611763 6016 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:30:56.611769 6016 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0126 18:30:56.611815 6016 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:30:56.611829 6016 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:30:56.611827 6016 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:30:56.611815 6016 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:30:56.611857 6016 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:30:56.611862 6016 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 18:30:56.611877 6016 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:30:56.611903 6016 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:30:56.611947 6016 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:30:56.611961 6016 factory.go:656] Stopping watch factory\\\\nI0126 18:30:56.611960 6016 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:30:56.611982 6016 ovnkube.go:599] Stopped ovnkube\\\\nI0126 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 
18:31:00.672700 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf
901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 
18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.683453 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.683801 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.683865 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.683934 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.684091 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.689755 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.702857 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.717702 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc 
kubenswrapper[4737]: I0126 18:31:00.735450 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f9
3f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.743803 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.744010 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:31:16.743982558 +0000 UTC m=+50.052177266 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.786786 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.786832 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.786844 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.786859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.786869 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.845418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.845481 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.845505 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.845532 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845632 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845679 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845706 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:16.845687637 +0000 UTC m=+50.153882345 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845811 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:16.845784279 +0000 UTC m=+50.153978997 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845915 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845932 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845946 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.845983 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:16.845971264 +0000 UTC m=+50.154165992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.846045 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.846057 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.846092 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.846123 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:16.846113897 +0000 UTC m=+50.154308615 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.889255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.889322 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.889346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.889373 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.889396 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.934581 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:30:29.645716354 +0000 UTC Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.981781 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.981784 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.982304 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.982511 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.982725 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.982941 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.983023 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:00 crc kubenswrapper[4737]: E0126 18:31:00.983109 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.991351 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.991388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.991397 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.991411 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:00 crc kubenswrapper[4737]: I0126 18:31:00.991422 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:00Z","lastTransitionTime":"2026-01-26T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.094368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.094410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.094569 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.094614 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.094627 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.197589 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.197633 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.197646 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.197664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.197675 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.250545 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:01 crc kubenswrapper[4737]: E0126 18:31:01.250776 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:01 crc kubenswrapper[4737]: E0126 18:31:01.250872 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:03.250848691 +0000 UTC m=+36.559043619 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.300909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.300972 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.300982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.301005 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.301017 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.403063 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.403125 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.403135 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.403149 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.403161 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.404706 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/1.log" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.409663 4737 scope.go:117] "RemoveContainer" containerID="3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5" Jan 26 18:31:01 crc kubenswrapper[4737]: E0126 18:31:01.410331 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.428119 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.442218 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.457889 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.472437 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.493850 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.505726 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.505771 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.505783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.505803 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.505816 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.513376 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2
09ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.530638 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.544688 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.556346 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc 
kubenswrapper[4737]: I0126 18:31:01.570121 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.583240 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.595712 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.608570 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.608612 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.608624 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.608641 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.608653 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.617967 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.631259 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.645275 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.658605 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.672146 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:01Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.711104 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.711145 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.711176 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.711193 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.711203 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.813276 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.813348 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.813361 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.813383 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.813446 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.916269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.916583 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.916658 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.916761 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.916851 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:01Z","lastTransitionTime":"2026-01-26T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:01 crc kubenswrapper[4737]: I0126 18:31:01.935056 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:23:06.173500571 +0000 UTC Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.021120 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.021457 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.021572 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.021682 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.021762 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.124909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.124965 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.124975 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.124992 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.125002 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.228449 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.228516 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.228535 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.228558 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.228577 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.331139 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.331205 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.331242 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.331271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.331291 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.435343 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.435447 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.435466 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.435536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.435562 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.538992 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.539049 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.539059 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.539084 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.539111 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.641570 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.641633 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.641648 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.641666 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.641678 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.744958 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.745031 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.745041 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.745058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.745114 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.848915 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.848967 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.848979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.848997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.849009 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.935299 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:28:41.033799492 +0000 UTC Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.952681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.952763 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.952787 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.952817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.952844 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:02Z","lastTransitionTime":"2026-01-26T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.981433 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:02 crc kubenswrapper[4737]: E0126 18:31:02.981793 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.981914 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.982159 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:02 crc kubenswrapper[4737]: E0126 18:31:02.982547 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:02 crc kubenswrapper[4737]: I0126 18:31:02.982567 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:02 crc kubenswrapper[4737]: E0126 18:31:02.982667 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:02 crc kubenswrapper[4737]: E0126 18:31:02.982944 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.056132 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.056191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.056207 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.056231 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.056247 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.158976 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.159026 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.159041 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.159059 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.159093 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.262389 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.262448 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.262464 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.262488 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.262509 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.273154 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:03 crc kubenswrapper[4737]: E0126 18:31:03.273398 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:03 crc kubenswrapper[4737]: E0126 18:31:03.273516 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:07.273492796 +0000 UTC m=+40.581687614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.366470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.366529 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.366565 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.366591 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.366611 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.470260 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.470331 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.470342 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.470364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.470384 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.572912 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.572953 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.572964 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.572979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.572990 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.675905 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.675971 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.675986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.676007 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.676033 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.779303 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.779344 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.779353 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.779373 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.779382 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.881997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.882051 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.882066 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.882115 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.882133 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.935526 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:48:06.666634601 +0000 UTC Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.985363 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.985417 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.985427 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.985450 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:03 crc kubenswrapper[4737]: I0126 18:31:03.985464 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:03Z","lastTransitionTime":"2026-01-26T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.088366 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.088725 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.088804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.088926 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.088999 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.191522 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.191557 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.191568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.191589 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.191600 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.295055 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.295129 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.295142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.295161 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.295172 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.397221 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.397270 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.397288 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.397310 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.397329 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.499593 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.499645 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.499654 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.499674 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.499686 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.602368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.602433 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.602450 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.602475 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.602497 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.705123 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.705172 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.705183 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.705200 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.705216 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.808536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.808892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.808957 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.809077 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.809182 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.912291 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.912731 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.912806 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.912883 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.912957 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:04Z","lastTransitionTime":"2026-01-26T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.936244 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:48:59.707604952 +0000 UTC Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.980909 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.980909 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:04 crc kubenswrapper[4737]: E0126 18:31:04.981150 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.981047 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:04 crc kubenswrapper[4737]: E0126 18:31:04.981286 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:04 crc kubenswrapper[4737]: I0126 18:31:04.980942 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:04 crc kubenswrapper[4737]: E0126 18:31:04.981411 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:04 crc kubenswrapper[4737]: E0126 18:31:04.981457 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.015772 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.015834 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.015843 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.015857 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.015867 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.119142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.119206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.119223 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.119245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.119258 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.221834 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.221875 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.221885 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.221902 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.221913 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.324605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.324706 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.324729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.324752 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.324766 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.426963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.427024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.427036 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.427054 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.427069 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.529538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.529605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.529622 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.529645 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.529662 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.633507 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.633560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.633574 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.633595 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.633615 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.736990 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.737058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.737151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.737179 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.737197 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.840434 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.840499 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.840517 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.840541 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.840558 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.937215 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:24:23.935980998 +0000 UTC Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.943890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.943970 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.943982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.944003 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:05 crc kubenswrapper[4737]: I0126 18:31:05.944018 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:05Z","lastTransitionTime":"2026-01-26T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.002173 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.002236 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.002259 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.002287 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.002311 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.022748 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.029010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.029159 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.029185 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.029213 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.029235 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.051172 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.056676 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.056753 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.056768 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.056791 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.056808 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.077026 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.082577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.082636 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.082647 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.082677 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.082693 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.102158 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.107857 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.107915 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.107928 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.107951 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.107964 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.127087 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.127295 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.129703 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.129759 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.129779 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.129812 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.129831 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.233964 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.234027 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.234039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.234061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.234098 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.337908 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.338030 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.338045 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.338085 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.338104 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.440466 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.440544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.440564 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.440592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.440611 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.544161 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.544228 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.544241 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.544260 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.544274 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.647396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.647455 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.647467 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.647486 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.647498 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.749717 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.749791 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.749804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.749839 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.749853 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.853600 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.853651 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.853664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.853681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.853695 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.937849 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:56:51.22805349 +0000 UTC Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.957080 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.957137 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.957147 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.957163 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.957175 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:06Z","lastTransitionTime":"2026-01-26T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.981924 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.981978 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.982014 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:06 crc kubenswrapper[4737]: I0126 18:31:06.982108 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.982175 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.982266 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.982339 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:06 crc kubenswrapper[4737]: E0126 18:31:06.982462 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.001345 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 
18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.022322 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.038414 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.060056 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.060176 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.060188 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.060254 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.060268 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.066870 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.085519 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.114815 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.131358 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.145542 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.158913 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc 
kubenswrapper[4737]: I0126 18:31:07.162831 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.162917 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.162932 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.162951 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.162965 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.179301 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.196001 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.209408 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.224429 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.241667 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.254379 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.264920 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc 
kubenswrapper[4737]: I0126 18:31:07.264981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.264997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.265018 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.265033 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.268298 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.280590 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:07Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.317053 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:07 crc kubenswrapper[4737]: E0126 18:31:07.317205 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:07 crc kubenswrapper[4737]: E0126 18:31:07.317260 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:15.317246468 +0000 UTC m=+48.625441176 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.367320 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.367373 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.367393 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.367417 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.367438 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.471541 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.472211 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.472233 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.472258 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.472312 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.576600 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.576665 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.576681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.576705 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.576725 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.681233 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.681285 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.681300 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.681334 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.681363 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.785308 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.785367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.785383 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.785407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.785425 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.889307 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.889421 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.889433 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.889452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.889464 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.938134 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:48:54.955744679 +0000 UTC Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.992383 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.992476 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.992492 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.992542 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:07 crc kubenswrapper[4737]: I0126 18:31:07.992561 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:07Z","lastTransitionTime":"2026-01-26T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.096512 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.096577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.096598 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.096628 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.096649 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.199532 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.199605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.199616 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.199638 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.199657 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.302338 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.302372 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.302380 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.302394 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.302403 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.405798 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.405839 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.405848 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.405861 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.405870 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.508329 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.508370 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.508380 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.508394 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.508409 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.612112 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.612180 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.612197 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.612217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.612230 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.715531 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.715575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.715592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.715614 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.715627 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.818851 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.818903 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.818918 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.818937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.818953 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.921626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.921673 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.921688 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.921705 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.921720 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:08Z","lastTransitionTime":"2026-01-26T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.938921 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:05:24.949107573 +0000 UTC Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.981238 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.981318 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.981477 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:08 crc kubenswrapper[4737]: I0126 18:31:08.981545 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:08 crc kubenswrapper[4737]: E0126 18:31:08.981528 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:08 crc kubenswrapper[4737]: E0126 18:31:08.981713 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:08 crc kubenswrapper[4737]: E0126 18:31:08.981840 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:08 crc kubenswrapper[4737]: E0126 18:31:08.981990 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.025620 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.026004 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.026225 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.026429 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.026607 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.130232 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.130297 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.130316 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.130343 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.130362 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.232880 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.232948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.232960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.232975 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.232986 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.336421 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.336866 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.337023 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.337257 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.337431 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.440185 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.440262 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.440285 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.440319 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.440345 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.543142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.543199 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.543216 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.543238 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.543255 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.646672 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.646755 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.646770 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.646792 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.646806 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.749860 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.749924 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.749952 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.749982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.750006 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.852863 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.852990 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.853019 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.853047 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.853098 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.940165 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:05:37.67899062 +0000 UTC Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.955515 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.955575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.955592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.955613 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:09 crc kubenswrapper[4737]: I0126 18:31:09.955628 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:09Z","lastTransitionTime":"2026-01-26T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.058271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.058321 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.058335 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.058351 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.058363 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.161875 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.161948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.161986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.162022 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.162047 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.264980 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.265033 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.265046 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.265064 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.265107 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.368452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.368538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.368560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.368588 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.368607 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.471784 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.471845 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.471864 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.471892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.471914 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.575388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.575728 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.575845 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.576007 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.576173 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.679395 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.679921 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.680206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.680454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.680638 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.783446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.783947 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.784160 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.784323 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.784463 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.888715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.888793 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.888815 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.888890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.888912 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.941003 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:53:27.284314216 +0000 UTC Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.981367 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.981402 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.981462 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:10 crc kubenswrapper[4737]: E0126 18:31:10.982179 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:10 crc kubenswrapper[4737]: E0126 18:31:10.982176 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.981653 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:10 crc kubenswrapper[4737]: E0126 18:31:10.982916 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:10 crc kubenswrapper[4737]: E0126 18:31:10.982474 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.992363 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.992431 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.992450 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.992475 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:10 crc kubenswrapper[4737]: I0126 18:31:10.992493 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:10Z","lastTransitionTime":"2026-01-26T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.094982 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.095047 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.095064 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.095129 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.095148 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.198392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.198707 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.198807 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.198907 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.198996 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.301886 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.301948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.301970 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.301999 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.302021 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.405309 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.405359 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.405375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.405399 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.405415 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.507946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.508004 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.508022 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.508045 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.508061 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.611626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.611671 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.611687 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.611709 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.611725 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.713799 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.713838 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.713853 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.713867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.713876 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.817582 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.817644 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.817667 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.817696 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.817718 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.920438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.920513 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.920532 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.920559 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.920575 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:11Z","lastTransitionTime":"2026-01-26T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:11 crc kubenswrapper[4737]: I0126 18:31:11.943004 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:02:30.248645058 +0000 UTC Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.023872 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.023944 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.023963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.023989 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.024009 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.127099 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.127155 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.127181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.127207 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.127228 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.230254 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.230328 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.230358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.230385 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.230406 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.333251 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.333301 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.333312 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.333330 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.333341 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.436136 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.436220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.436241 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.436266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.436287 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.538825 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.538864 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.538877 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.538891 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.538902 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.640939 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.640983 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.640994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.641010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.641022 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.743607 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.743664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.743681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.743704 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.743721 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.846312 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.846404 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.846415 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.846436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.846451 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.943417 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:30:47.062209594 +0000 UTC Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.949610 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.949654 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.949664 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.949712 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.949722 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:12Z","lastTransitionTime":"2026-01-26T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.981681 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.981728 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.981787 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:12 crc kubenswrapper[4737]: E0126 18:31:12.981806 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:12 crc kubenswrapper[4737]: E0126 18:31:12.982007 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:12 crc kubenswrapper[4737]: I0126 18:31:12.982105 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:12 crc kubenswrapper[4737]: E0126 18:31:12.982464 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:12 crc kubenswrapper[4737]: E0126 18:31:12.982488 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.055165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.055617 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.055775 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.055866 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.057345 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.161809 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.161882 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.161902 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.161929 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.161947 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.264857 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.264906 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.264918 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.264936 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.264984 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.368902 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.368977 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.368995 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.369024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.369043 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.472285 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.472410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.472428 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.472452 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.472470 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.575671 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.576190 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.576429 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.576646 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.576859 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.680783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.681398 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.681590 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.681735 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.681874 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.785781 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.785848 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.785866 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.785890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.785909 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.888675 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.888716 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.888729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.888745 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.888757 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.944506 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:52:59.546600128 +0000 UTC Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.991605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.991702 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.991733 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.991759 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:13 crc kubenswrapper[4737]: I0126 18:31:13.991777 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:13Z","lastTransitionTime":"2026-01-26T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.095416 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.095454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.095462 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.095478 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.095488 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.197949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.197980 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.197988 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.198000 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.198008 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.300728 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.300776 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.300789 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.300811 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.300822 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.404172 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.404232 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.404249 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.404271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.404287 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.507892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.507939 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.507951 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.507970 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.507984 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.611219 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.611271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.611284 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.611302 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.611353 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.714730 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.714776 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.714789 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.714810 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.714823 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.818214 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.818271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.818288 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.818313 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.818329 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.921714 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.921752 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.921762 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.921783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.921798 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:14Z","lastTransitionTime":"2026-01-26T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.945670 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:36:31.22986589 +0000 UTC Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.981446 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.981606 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.981681 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:14 crc kubenswrapper[4737]: I0126 18:31:14.981621 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:14 crc kubenswrapper[4737]: E0126 18:31:14.981803 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:14 crc kubenswrapper[4737]: E0126 18:31:14.981944 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:14 crc kubenswrapper[4737]: E0126 18:31:14.982154 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:14 crc kubenswrapper[4737]: E0126 18:31:14.982250 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.025563 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.025620 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.025638 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.025662 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.025682 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.129306 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.129361 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.129373 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.129390 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.129760 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.233162 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.233222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.233240 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.233262 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.233280 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.336245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.336305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.336321 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.336341 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.336359 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.410531 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:15 crc kubenswrapper[4737]: E0126 18:31:15.410825 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:15 crc kubenswrapper[4737]: E0126 18:31:15.410944 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:31.410914449 +0000 UTC m=+64.719109197 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.440432 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.440478 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.440489 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.440505 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.440517 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.544392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.544457 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.544472 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.544493 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.544506 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.649670 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.649730 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.649748 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.649775 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.649797 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.752900 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.752975 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.752989 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.753048 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.753069 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.856536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.856594 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.856610 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.856635 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.856651 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.946230 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:18:39.675889192 +0000 UTC Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.959965 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.960027 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.960038 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.960058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.960086 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:15Z","lastTransitionTime":"2026-01-26T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:15 crc kubenswrapper[4737]: I0126 18:31:15.982594 4737 scope.go:117] "RemoveContainer" containerID="3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.063053 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.063117 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.063127 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.063141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.063150 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.158046 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.158580 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.158608 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.158639 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.158664 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.185407 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.190994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.191055 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.191098 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.191125 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.191147 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.213228 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.217515 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.217568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.217581 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.217602 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.217615 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.234604 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.239434 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.239481 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.239497 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.239517 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.239533 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.252744 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.257890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.257940 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.257981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.258010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.258030 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.276210 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.276386 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.278810 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.278853 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.278865 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.278880 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.278890 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.381333 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.381409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.381423 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.381446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.381462 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.470097 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/1.log" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.473851 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.474365 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.483910 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.483960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.483972 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.483997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.484012 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.495268 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.507673 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.518059 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc 
kubenswrapper[4737]: I0126 18:31:16.537124 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.550997 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.565060 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.581882 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.591845 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.591884 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.591896 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.591912 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.591927 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.601386 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff
3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.615019 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.627392 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.639274 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.653281 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.669191 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.683514 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.694695 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.694732 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.694742 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.694759 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.694771 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.698109 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.712038 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.735538 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.745960 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.746161 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:31:48.746130331 +0000 UTC m=+82.054325049 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.798207 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.798271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.798281 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.798296 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.798306 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.847028 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.847110 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.847146 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.847187 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847294 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847362 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:48.847342176 +0000 UTC m=+82.155536884 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847627 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847649 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847665 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847703 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:48.847695685 +0000 UTC m=+82.155890393 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847755 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847809 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847827 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.847926 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:48.847894039 +0000 UTC m=+82.156088747 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.848308 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.848505 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:31:48.848480793 +0000 UTC m=+82.156675511 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.902730 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.903186 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.903267 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.903375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.903447 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:16Z","lastTransitionTime":"2026-01-26T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.947236 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:36:19.071574664 +0000 UTC Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.981835 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.982036 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.982275 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.982354 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.982423 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:16 crc kubenswrapper[4737]: I0126 18:31:16.982468 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.982614 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:16 crc kubenswrapper[4737]: E0126 18:31:16.982727 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.005760 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.005816 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.005829 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.005849 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.005862 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.013516 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.033574 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.094316 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.108080 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.108120 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.108130 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.108149 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.108164 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.110538 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.126039 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229d
aeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.145170 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.162791 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.181136 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.196374 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.212291 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.212347 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.212359 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.212377 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.212392 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.217452 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.234833 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.251661 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.273734 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.291352 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.310196 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.316063 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.316441 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.316609 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.316774 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.316974 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.324683 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.338397 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc 
kubenswrapper[4737]: I0126 18:31:17.419703 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.419765 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.419776 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.419799 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.419810 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.479880 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/2.log" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.481008 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/1.log" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.484335 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d" exitCode=1 Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.484385 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.484443 4737 scope.go:117] "RemoveContainer" containerID="3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.485425 4737 scope.go:117] "RemoveContainer" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d" Jan 26 18:31:17 crc kubenswrapper[4737]: E0126 18:31:17.485653 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.508325 4737 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd
4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.524129 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.524525 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.524622 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.524705 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.524778 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.526912 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.541670 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc 
kubenswrapper[4737]: I0126 18:31:17.557224 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.575629 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.590453 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.611514 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.626715 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.627758 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc 
kubenswrapper[4737]: I0126 18:31:17.627787 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.627797 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.627816 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.627827 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.639677 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.651970 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.665925 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.682368 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.696523 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.708773 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.722686 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.730511 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.730551 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.730560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.730574 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.730585 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.741505 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3914aba793322088149ecf9d7ad29dc5cbc6240e243dd5ce17c8df1ae4e39af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"luster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.188:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {53c717ca-2174-4315-bb03-c937a9c0d9b6}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535532 6159 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 18:30:59.535235 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 18:30:59.535640 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:30:59.535664 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 18:30:59.535725 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5
b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.759864 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.832614 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.832653 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.832663 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.832675 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.832684 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.936165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.936269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.936298 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.936330 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.936356 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:17Z","lastTransitionTime":"2026-01-26T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:17 crc kubenswrapper[4737]: I0126 18:31:17.947707 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:26:25.992402898 +0000 UTC Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.039419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.039521 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.039535 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.039556 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.039572 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.142984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.143029 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.143042 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.143060 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.143091 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.245525 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.245577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.245591 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.245615 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.245631 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.348383 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.348441 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.348453 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.348474 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.348487 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.450913 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.450956 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.450966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.450984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.450996 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.494986 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/2.log" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.499961 4737 scope.go:117] "RemoveContainer" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d" Jan 26 18:31:18 crc kubenswrapper[4737]: E0126 18:31:18.500243 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.545607 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.554525 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.554616 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.554627 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.554667 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.554683 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.567162 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.607494 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.619269 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.638659 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.655704 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.668009 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc 
kubenswrapper[4737]: I0126 18:31:18.668042 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.668052 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.668089 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.668108 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.672693 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.686137 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.703662 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.720357 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.736304 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.751510 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.764942 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.770478 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.770524 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.770536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.770557 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.770571 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.784505 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.800788 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.812978 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.824757 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:18 crc 
kubenswrapper[4737]: I0126 18:31:18.873010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.873060 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.873093 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.873109 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.873119 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.947983 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:01:02.562811226 +0000 UTC Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.975346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.975385 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.975395 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.975414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.975425 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:18Z","lastTransitionTime":"2026-01-26T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.981728 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.981728 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.981804 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:18 crc kubenswrapper[4737]: E0126 18:31:18.981910 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:18 crc kubenswrapper[4737]: I0126 18:31:18.982147 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:18 crc kubenswrapper[4737]: E0126 18:31:18.982281 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:18 crc kubenswrapper[4737]: E0126 18:31:18.982227 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:18 crc kubenswrapper[4737]: E0126 18:31:18.982483 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.078370 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.078410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.078424 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.078439 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.078449 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.181245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.181288 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.181305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.181326 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.181341 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.284131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.284169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.284178 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.284192 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.284204 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.386939 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.386984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.386994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.387010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.387021 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.489449 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.489487 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.489498 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.489514 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.489524 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.592451 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.592525 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.592541 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.592568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.592583 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.695327 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.695375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.695385 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.695402 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.695414 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.802273 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.802325 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.802338 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.802358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.802372 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.905916 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.905981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.905997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.906022 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.906040 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:19Z","lastTransitionTime":"2026-01-26T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:19 crc kubenswrapper[4737]: I0126 18:31:19.948971 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:49:48.683321451 +0000 UTC Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.009430 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.009481 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.009494 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.009513 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.009526 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.112396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.112445 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.112463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.112484 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.112496 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.215454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.215501 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.215515 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.215528 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.215541 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.318528 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.318574 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.318587 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.318608 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.318624 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.421904 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.421947 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.421960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.421980 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.421997 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.524724 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.524780 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.524792 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.524809 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.524820 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.627314 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.627360 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.627375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.627398 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.627413 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.729912 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.729993 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.730013 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.730039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.730058 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.833967 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.833999 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.834008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.834023 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.834033 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.937902 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.937954 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.937963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.937979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.937989 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:20Z","lastTransitionTime":"2026-01-26T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.949342 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:27:52.793847963 +0000 UTC Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.980893 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.980911 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.981018 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:20 crc kubenswrapper[4737]: E0126 18:31:20.981044 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:20 crc kubenswrapper[4737]: I0126 18:31:20.981108 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:20 crc kubenswrapper[4737]: E0126 18:31:20.981229 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:20 crc kubenswrapper[4737]: E0126 18:31:20.981380 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:20 crc kubenswrapper[4737]: E0126 18:31:20.981491 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.039985 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.040030 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.040040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.040056 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.040066 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.143096 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.143178 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.143191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.143214 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.143227 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.246268 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.246305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.246314 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.246329 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.246340 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.348905 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.348965 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.348977 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.348994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.349007 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.451167 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.451226 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.451241 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.451262 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.451273 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.456682 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.471416 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.474981 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.491320 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a8
94300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.507356 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc 
kubenswrapper[4737]: I0126 18:31:21.526187 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.546848 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.553803 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.553832 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.553843 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc 
kubenswrapper[4737]: I0126 18:31:21.553859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.553873 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.564053 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.576511 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.589003 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.602987 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.618630 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.630652 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.642329 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.656702 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.657168 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.657305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.657356 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.657377 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.662397 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.677362 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.693690 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.707225 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.719887 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.760172 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.760214 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.760226 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.760246 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.760260 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.863215 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.863255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.863264 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.863279 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.863291 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.949635 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:21:12.37165383 +0000 UTC Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.966468 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.966512 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.966522 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.966536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:21 crc kubenswrapper[4737]: I0126 18:31:21.966563 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:21Z","lastTransitionTime":"2026-01-26T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.073164 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.073203 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.073218 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.073235 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.073247 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.176085 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.176142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.176154 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.176171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.176181 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.279082 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.279135 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.279149 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.279167 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.279180 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.382024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.382110 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.382122 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.382139 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.382151 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.485255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.485312 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.485323 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.485345 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.485365 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.587564 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.587605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.587617 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.587634 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.587645 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.690929 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.690994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.691009 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.691031 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.691044 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.794698 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.794753 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.794762 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.794780 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.794792 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.897636 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.897706 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.897719 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.897737 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.897749 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:22Z","lastTransitionTime":"2026-01-26T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.949959 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:29:07.74782254 +0000 UTC Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.981676 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.981732 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.981861 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:22 crc kubenswrapper[4737]: E0126 18:31:22.981978 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:22 crc kubenswrapper[4737]: I0126 18:31:22.982033 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:22 crc kubenswrapper[4737]: E0126 18:31:22.982220 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:22 crc kubenswrapper[4737]: E0126 18:31:22.982244 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:22 crc kubenswrapper[4737]: E0126 18:31:22.982317 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.000391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.000440 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.000453 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.000471 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.000484 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.103875 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.103935 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.103946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.103962 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.103974 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.206749 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.206822 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.206837 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.206857 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.206867 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.309653 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.309722 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.309741 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.309770 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.309792 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.413345 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.413391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.413409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.413432 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.413446 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.515109 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.515175 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.515187 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.515204 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.515216 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.618128 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.618469 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.618575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.618683 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.618767 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.721460 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.721508 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.721522 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.721543 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.721554 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.823972 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.824024 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.824034 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.824052 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.824091 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.926616 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.926671 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.926683 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.926705 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.926721 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:23Z","lastTransitionTime":"2026-01-26T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:23 crc kubenswrapper[4737]: I0126 18:31:23.950963 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:40:12.505750988 +0000 UTC Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.029161 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.029208 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.029219 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.029234 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.029244 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.131472 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.131509 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.131518 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.131533 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.131542 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.234088 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.234131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.234142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.234160 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.234175 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.336751 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.336802 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.336813 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.336833 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.336842 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.439849 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.439914 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.439927 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.439949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.439965 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.542058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.542116 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.542152 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.542171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.542182 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.644739 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.644786 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.644799 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.644817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.644829 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.748207 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.748264 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.748277 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.748301 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.748314 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.851993 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.852046 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.852055 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.852092 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.852103 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.951166 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:23:07.371617562 +0000 UTC Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.953909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.954169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.954282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.954407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.954485 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:24Z","lastTransitionTime":"2026-01-26T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.981663 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.981701 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.981727 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:24 crc kubenswrapper[4737]: I0126 18:31:24.981663 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:24 crc kubenswrapper[4737]: E0126 18:31:24.981818 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:24 crc kubenswrapper[4737]: E0126 18:31:24.981870 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:24 crc kubenswrapper[4737]: E0126 18:31:24.981953 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:24 crc kubenswrapper[4737]: E0126 18:31:24.982034 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.057258 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.057325 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.057339 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.057356 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.057371 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.159593 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.159662 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.159675 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.159694 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.159706 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.262528 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.262579 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.262590 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.262609 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.262621 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.365626 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.365678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.365693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.365712 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.365944 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.468391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.468444 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.468457 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.468476 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.468487 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.571225 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.571271 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.571281 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.571298 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.571309 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.673781 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.673828 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.673839 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.673859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.673869 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.776625 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.776677 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.776693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.776717 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.776732 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.879237 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.879285 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.879296 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.879315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.879326 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.951549 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:00:47.669914423 +0000 UTC Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.982016 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.982066 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.982107 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.982124 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:25 crc kubenswrapper[4737]: I0126 18:31:25.982137 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:25Z","lastTransitionTime":"2026-01-26T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.084989 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.085048 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.085090 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.085116 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.085132 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.188177 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.188224 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.188237 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.188254 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.188268 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.291499 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.291574 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.291588 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.291656 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.291673 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.394883 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.394916 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.394928 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.394945 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.394960 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.487715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.487760 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.487774 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.487790 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.487801 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.504188 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.509453 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.509635 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.509725 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.509867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.509960 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.522986 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.527966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.528021 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.528034 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.528062 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.528095 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.545033 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.549628 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.549790 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.549991 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.550167 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.550335 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.564610 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.569514 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.569588 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.569605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.569630 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.569649 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.583634 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.583791 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.586038 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.586220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.586719 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.587034 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.587338 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.691099 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.691694 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.691784 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.691870 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.691947 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.795867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.795931 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.795944 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.795977 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.795991 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.898204 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.898262 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.898305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.898330 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.898344 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:26Z","lastTransitionTime":"2026-01-26T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.952514 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:26:13.44748443 +0000 UTC Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.981060 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.981121 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.981162 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.981573 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.981557 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.981711 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.981785 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:26 crc kubenswrapper[4737]: E0126 18:31:26.981831 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:26 crc kubenswrapper[4737]: I0126 18:31:26.995665 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee
1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.001815 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.001857 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.001869 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.001891 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.001904 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.014297 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958
ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.028837 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a8
94300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.043980 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc 
kubenswrapper[4737]: I0126 18:31:27.073177 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.088841 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.103786 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.105543 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.105683 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.105786 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.105933 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.106023 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.117163 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.130631 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.144865 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.158880 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e7
7606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.172247 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.195387 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.209690 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.209783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.209809 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.209847 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.209876 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.213515 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.237280 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.252894 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.269744 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.293593 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.312490 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.312534 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.312544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.312562 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.312574 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.414959 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.415421 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.415623 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.415712 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.415798 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.519505 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.519558 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.519568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.519589 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.519600 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.623460 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.623815 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.623826 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.623840 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.623850 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.726282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.726338 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.726349 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.726367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.726379 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.829641 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.829733 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.829775 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.829809 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.829832 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.933063 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.933138 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.933148 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.933166 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.933177 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:27Z","lastTransitionTime":"2026-01-26T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:27 crc kubenswrapper[4737]: I0126 18:31:27.953695 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:39:29.255864882 +0000 UTC Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.035756 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.035813 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.035829 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.035847 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.035866 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.138591 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.138646 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.138661 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.138681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.138702 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.241630 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.241711 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.241726 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.241750 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.241769 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.344411 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.344462 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.344474 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.344495 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.344505 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.447442 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.447489 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.447505 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.447523 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.447533 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.550706 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.550761 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.550774 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.550793 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.550804 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.654268 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.654336 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.654351 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.654367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.654378 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.758061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.758467 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.758584 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.758668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.758739 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.861935 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.861991 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.862005 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.862028 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.862044 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.953852 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:28:35.021422902 +0000 UTC Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.966066 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.966134 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.966147 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.966164 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.966175 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:28Z","lastTransitionTime":"2026-01-26T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.982223 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:28 crc kubenswrapper[4737]: E0126 18:31:28.982412 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.982476 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:28 crc kubenswrapper[4737]: E0126 18:31:28.982540 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.982223 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:28 crc kubenswrapper[4737]: E0126 18:31:28.982606 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:28 crc kubenswrapper[4737]: I0126 18:31:28.983384 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:28 crc kubenswrapper[4737]: E0126 18:31:28.983636 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.070239 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.070290 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.070301 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.070318 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.070328 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.172950 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.172995 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.173005 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.173022 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.173034 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.277012 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.277103 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.277118 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.277139 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.277152 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.380229 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.380275 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.380286 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.380303 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.380317 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.482770 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.482823 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.482838 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.482859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.482873 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.586122 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.586574 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.586641 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.586743 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.586804 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.689344 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.689404 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.689422 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.689446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.689462 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.792536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.792949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.793013 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.793131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.793221 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.896250 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.896332 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.896352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.896384 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.896407 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.954963 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:37:55.431325224 +0000 UTC Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.999364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.999405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.999414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.999458 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:29 crc kubenswrapper[4737]: I0126 18:31:29.999472 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:29Z","lastTransitionTime":"2026-01-26T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.102405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.102502 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.102526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.102558 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.102578 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.205948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.205995 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.206012 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.206035 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.206053 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.308465 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.308511 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.308523 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.308543 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.308558 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.410946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.411369 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.411521 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.411631 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.411732 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.515265 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.515309 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.515319 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.515335 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.515347 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.617456 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.617485 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.617494 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.617506 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.617515 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.720151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.720199 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.720236 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.720254 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.720269 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.823102 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.823150 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.823159 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.823174 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.823187 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.926204 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.926602 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.926715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.926880 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.926971 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:30Z","lastTransitionTime":"2026-01-26T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.955891 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:22:55.124258322 +0000 UTC Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.981393 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.981440 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.982216 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:30 crc kubenswrapper[4737]: I0126 18:31:30.982009 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:30 crc kubenswrapper[4737]: E0126 18:31:30.982329 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:30 crc kubenswrapper[4737]: E0126 18:31:30.982614 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:30 crc kubenswrapper[4737]: E0126 18:31:30.982771 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:30 crc kubenswrapper[4737]: E0126 18:31:30.982922 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.030244 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.030292 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.030305 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.030322 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.030335 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.132463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.132507 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.132516 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.132535 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.132552 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.235241 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.235297 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.235310 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.235332 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.235346 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.338294 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.338601 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.338731 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.338871 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.338974 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.412371 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:31 crc kubenswrapper[4737]: E0126 18:31:31.412556 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:31 crc kubenswrapper[4737]: E0126 18:31:31.412634 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:32:03.412601941 +0000 UTC m=+96.720796649 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.442326 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.442716 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.442830 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.442935 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.443017 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.544925 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.544987 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.544997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.545014 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.545025 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.647117 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.647158 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.647171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.647188 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.647200 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.749974 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.750032 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.750045 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.750066 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.750100 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.852763 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.853182 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.853268 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.853348 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.853534 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956106 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:16:13.605115466 +0000 UTC Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956543 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956579 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956590 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956608 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:31 crc kubenswrapper[4737]: I0126 18:31:31.956621 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:31Z","lastTransitionTime":"2026-01-26T18:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.058983 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.059299 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.059375 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.059443 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.059504 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.161939 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.161971 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.161981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.162032 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.162043 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.264151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.264483 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.264568 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.264634 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.264721 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.367648 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.367998 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.368100 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.368196 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.368390 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.471612 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.471672 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.471686 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.471710 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.471724 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.548389 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/0.log" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.548464 4737 generic.go:334] "Generic (PLEG): container finished" podID="82627aad-2019-482e-934a-7e9729927a34" containerID="938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c" exitCode=1 Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.548516 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerDied","Data":"938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.549102 4737 scope.go:117] "RemoveContainer" containerID="938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.565296 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.576767 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.577063 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.577191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 
18:31:32.577272 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.577339 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.578919 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.605860 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.622557 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.637332 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.651229 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.666018 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.680622 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.681831 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.681992 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.682165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.682286 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.682378 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.694433 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.707979 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.721414 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.746367 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.761741 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.774509 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.784870 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.784963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.784976 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.784999 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.785013 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.786999 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.798624 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc 
kubenswrapper[4737]: I0126 18:31:32.810538 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.824555 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:32Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.887941 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.887981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.887991 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.888006 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.888016 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.956302 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:58:27.603741327 +0000 UTC Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.981745 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.981836 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.982056 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.982116 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:32 crc kubenswrapper[4737]: E0126 18:31:32.982255 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.982454 4737 scope.go:117] "RemoveContainer" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d" Jan 26 18:31:32 crc kubenswrapper[4737]: E0126 18:31:32.982521 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:32 crc kubenswrapper[4737]: E0126 18:31:32.982678 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:32 crc kubenswrapper[4737]: E0126 18:31:32.982689 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:32 crc kubenswrapper[4737]: E0126 18:31:32.982746 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.990220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.990409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.990544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.990693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:32 crc kubenswrapper[4737]: I0126 18:31:32.990800 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:32Z","lastTransitionTime":"2026-01-26T18:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.094414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.094463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.094472 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.094488 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.094499 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.197494 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.197559 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.197571 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.197589 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.197605 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.300297 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.300343 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.300352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.300367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.300377 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.403216 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.403269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.403280 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.403299 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.403313 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.505950 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.505994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.506005 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.506308 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.506322 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.554230 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/0.log" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.554292 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerStarted","Data":"debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.574295 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.588843 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.602386 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.608983 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.609030 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.609039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 
18:31:33.609058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.609089 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.612977 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.626231 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.641243 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.651799 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.663694 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.678240 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.692318 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.705997 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.712208 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.712545 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.712724 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.713083 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.713307 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.720655 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.735203 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.755014 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.769957 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.785001 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.796853 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.807773 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:33Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:33 crc 
kubenswrapper[4737]: I0126 18:31:33.815560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.815732 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.815814 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.815940 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.816025 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.918921 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.918979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.918992 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.919011 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.919024 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:33Z","lastTransitionTime":"2026-01-26T18:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.957630 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:42:44.018716433 +0000 UTC Jan 26 18:31:33 crc kubenswrapper[4737]: I0126 18:31:33.993432 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.022014 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.022125 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.022144 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.022168 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.022184 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.125478 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.125526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.125538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.125555 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.125567 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.228368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.228419 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.228429 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.228447 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.228459 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.331410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.331454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.331465 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.331482 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.331495 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.433632 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.433915 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.434007 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.434141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.434223 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.537729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.537810 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.537828 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.537855 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.537873 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.641008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.641423 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.641438 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.641456 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.641467 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.744137 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.744199 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.744216 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.744240 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.744253 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.846493 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.846536 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.846548 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.846565 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.846579 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.948909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.948948 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.948959 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.948974 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.948984 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:34Z","lastTransitionTime":"2026-01-26T18:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.958557 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:11:45.860914976 +0000 UTC Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.983403 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:34 crc kubenswrapper[4737]: E0126 18:31:34.983532 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.983757 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:34 crc kubenswrapper[4737]: E0126 18:31:34.983851 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.984016 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:34 crc kubenswrapper[4737]: I0126 18:31:34.985154 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:34 crc kubenswrapper[4737]: E0126 18:31:34.985153 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:34 crc kubenswrapper[4737]: E0126 18:31:34.985223 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.051417 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.051477 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.051504 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.051526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.051546 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.154276 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.154331 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.154346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.154366 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.154383 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.257209 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.257256 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.257273 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.257337 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.257351 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.360165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.360446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.360563 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.360665 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.360736 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.462972 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.463041 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.463053 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.463089 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.463104 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.565704 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.565769 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.565783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.565801 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.565813 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.668780 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.668818 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.668827 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.668845 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.668855 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.771146 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.771181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.771190 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.771206 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.771216 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.874045 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.874477 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.874570 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.874645 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.874782 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.958879 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:43:31.172920698 +0000 UTC Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.977350 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.977386 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.977395 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.977414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:35 crc kubenswrapper[4737]: I0126 18:31:35.977426 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:35Z","lastTransitionTime":"2026-01-26T18:31:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.080391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.081217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.081264 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.081287 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.081300 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.183529 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.183593 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.183607 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.183630 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.183648 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.286802 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.287218 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.287331 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.287431 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.288039 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.391454 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.391852 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.391946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.392094 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.392192 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.495602 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.495651 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.495663 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.495681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.495693 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.598436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.598817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.598946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.599058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.599163 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.702794 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.703160 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.703310 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.703505 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.703598 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.760347 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.760426 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.760445 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.760469 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.760486 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.776171 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.781732 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.781775 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.781786 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.781807 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.781818 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.795170 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.798504 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.798529 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.798541 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.798556 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.798566 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.810245 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.812922 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.812943 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.812952 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.812966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.812976 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.824870 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.828910 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.828956 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.828967 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.828985 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.828997 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.841630 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.841793 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.844146 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.844181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.844191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.844212 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.844225 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.946544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.946613 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.946628 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.946650 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.946664 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:36Z","lastTransitionTime":"2026-01-26T18:31:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.959815 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:26:19.602948679 +0000 UTC Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.981562 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.981582 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.981723 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.981742 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.981765 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.981951 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.982045 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:36 crc kubenswrapper[4737]: E0126 18:31:36.982206 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:36 crc kubenswrapper[4737]: I0126 18:31:36.994032 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.003938 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94095e78-9414-4124-97ef-06acf16f3751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.018067 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.029641 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.040258 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.053049 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.053110 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.053122 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.053142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.053153 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.058973 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.074622 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c
72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.086859 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.102976 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.119435 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.130718 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.144856 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.155959 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.156181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.156341 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.156443 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.156560 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.163428 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.178176 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.192702 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc 
kubenswrapper[4737]: I0126 18:31:37.213194 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.227589 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.241486 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.252214 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:37Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.259368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.259722 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.259827 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.259944 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.260041 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.362350 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.362417 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.362436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.362462 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.362479 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.466592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.466919 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.466994 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.467178 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.467268 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.569295 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.569340 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.569352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.569369 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.569381 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.672244 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.672282 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.672293 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.672312 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.672324 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.773997 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.774039 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.774052 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.774093 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.774109 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.876802 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.876845 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.876856 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.876871 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.876881 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.960017 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:54:36.607093133 +0000 UTC Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.979083 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.979131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.979141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.979158 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:37 crc kubenswrapper[4737]: I0126 18:31:37.979168 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:37Z","lastTransitionTime":"2026-01-26T18:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.082431 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.082479 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.082490 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.082510 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.082523 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.184986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.185025 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.185036 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.185053 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.185063 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.287303 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.287352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.287365 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.287386 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.287405 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.390127 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.390173 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.390185 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.390202 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.390212 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.492294 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.492346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.492358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.492376 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.492386 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.594967 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.595008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.595018 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.595037 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.595048 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.697894 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.697937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.697946 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.697967 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.697979 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.800598 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.800648 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.800658 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.800678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.800691 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.903336 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.903396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.903414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.903436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.903453 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:38Z","lastTransitionTime":"2026-01-26T18:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.960885 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:24:17.951314473 +0000 UTC Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.981115 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.981155 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.981193 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:38 crc kubenswrapper[4737]: I0126 18:31:38.981119 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:38 crc kubenswrapper[4737]: E0126 18:31:38.981273 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:38 crc kubenswrapper[4737]: E0126 18:31:38.981451 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:38 crc kubenswrapper[4737]: E0126 18:31:38.981607 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:38 crc kubenswrapper[4737]: E0126 18:31:38.981719 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.010001 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.010051 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.010062 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.010093 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.010104 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.113410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.113450 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.113461 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.113477 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.113488 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.215784 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.215817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.215825 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.215843 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.215853 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.317822 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.317867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.317877 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.317890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.317899 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.420999 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.421052 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.421062 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.421098 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.421115 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.523545 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.523577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.523586 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.523599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.523609 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.626953 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.626985 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.626996 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.627015 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.627026 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.729609 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.729667 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.729680 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.729701 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.729714 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.832661 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.832715 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.832725 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.832744 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.832755 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.936503 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.936538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.936547 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.936560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.936569 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:39Z","lastTransitionTime":"2026-01-26T18:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:39 crc kubenswrapper[4737]: I0126 18:31:39.962157 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:23:48.693321445 +0000 UTC Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.039187 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.039232 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.039245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.039266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.039278 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.141786 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.141833 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.141844 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.141862 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.141872 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.244037 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.244102 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.244116 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.244131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.244142 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.346605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.346659 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.346674 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.346693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.346707 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.450016 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.450093 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.450105 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.450123 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.450134 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.553470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.553519 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.553529 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.553545 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.553555 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.657048 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.657122 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.657131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.657148 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.657162 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.760469 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.760530 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.760542 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.760566 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.760584 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.863986 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.864040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.864051 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.864093 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.864105 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.962555 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:55:20.832998308 +0000 UTC Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.967309 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.967360 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.967374 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.967398 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.967412 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:40Z","lastTransitionTime":"2026-01-26T18:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.981612 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.981682 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.981606 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:40 crc kubenswrapper[4737]: E0126 18:31:40.981756 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:40 crc kubenswrapper[4737]: I0126 18:31:40.981796 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:40 crc kubenswrapper[4737]: E0126 18:31:40.982002 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:40 crc kubenswrapper[4737]: E0126 18:31:40.982191 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:40 crc kubenswrapper[4737]: E0126 18:31:40.982265 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.070797 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.070856 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.070866 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.070886 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.070897 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.173068 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.173123 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.173131 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.173145 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.173155 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.276304 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.276348 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.276361 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.276381 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.276393 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.378915 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.378949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.378959 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.378979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.378990 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.481729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.481795 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.481806 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.481823 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.481834 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.583976 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.584014 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.584023 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.584040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.584051 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.687040 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.687153 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.687174 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.687199 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.687217 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.791730 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.791794 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.791807 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.791828 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.791843 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.894956 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.895042 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.895114 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.895151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.895173 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.963256 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:18:30.817940141 +0000 UTC Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.998754 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.998890 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.998913 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.998966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:41 crc kubenswrapper[4737]: I0126 18:31:41.998989 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:41Z","lastTransitionTime":"2026-01-26T18:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.102119 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.102176 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.102195 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.102220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.102236 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.204818 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.204862 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.204871 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.204886 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.204896 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.308062 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.308139 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.308151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.308171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.308181 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.413246 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.413314 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.413335 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.413363 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.413384 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.516360 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.516414 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.516426 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.516445 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.516502 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.619457 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.619507 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.619519 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.619538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.619552 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.722515 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.722556 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.722564 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.722580 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.722593 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.825295 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.825347 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.825358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.825377 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.825389 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.928133 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.928200 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.928217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.928243 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.928261 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:42Z","lastTransitionTime":"2026-01-26T18:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.963848 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:26:07.508827956 +0000 UTC Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.981662 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.981755 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.981756 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:42 crc kubenswrapper[4737]: E0126 18:31:42.982434 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:42 crc kubenswrapper[4737]: E0126 18:31:42.982588 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:42 crc kubenswrapper[4737]: I0126 18:31:42.981790 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:42 crc kubenswrapper[4737]: E0126 18:31:42.982722 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:42 crc kubenswrapper[4737]: E0126 18:31:42.982804 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.031154 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.031208 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.031220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.031236 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.031247 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.135060 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.135144 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.135161 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.135187 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.135207 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.237645 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.237690 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.237699 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.237714 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.237723 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.340709 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.340765 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.340777 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.340797 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.340809 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.444222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.444307 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.444321 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.444341 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.444353 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.840408 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.840445 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.840473 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.840488 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.840499 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.943315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.943367 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.943385 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.943407 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.943421 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:43Z","lastTransitionTime":"2026-01-26T18:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:43 crc kubenswrapper[4737]: I0126 18:31:43.964751 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:30:05.77324576 +0000 UTC Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.046385 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.046434 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.046446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.046465 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.046478 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.148344 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.148387 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.148420 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.148436 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.148445 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.251376 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.251430 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.251446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.251465 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.251483 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.354192 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.354223 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.354234 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.354247 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.354256 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.456881 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.456928 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.456945 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.456966 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.456979 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.560169 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.560213 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.560224 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.560243 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.560252 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.663214 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.663266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.663279 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.663302 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.663316 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.766183 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.766261 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.766274 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.766293 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.766304 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.868587 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.868618 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.868627 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.868643 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.868655 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.965020 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:13:46.4016363 +0000 UTC
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.971346 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.971373 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.971382 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.971398 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.971409 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:44Z","lastTransitionTime":"2026-01-26T18:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.981668 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.981671 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.981668 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 18:31:44 crc kubenswrapper[4737]: I0126 18:31:44.981796 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r"
Jan 26 18:31:44 crc kubenswrapper[4737]: E0126 18:31:44.981790 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 18:31:44 crc kubenswrapper[4737]: E0126 18:31:44.982046 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14"
Jan 26 18:31:44 crc kubenswrapper[4737]: E0126 18:31:44.982038 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 18:31:44 crc kubenswrapper[4737]: E0126 18:31:44.982145 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.073418 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.073456 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.073468 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.073481 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.073490 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.176487 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.176555 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.176596 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.176629 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.176651 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.279257 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.279365 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.279381 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.279405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.279420 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.382530 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.382581 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.382596 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.382613 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.382625 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.484930 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.484977 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.484989 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.485008 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.485022 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.587606 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.587655 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.587668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.587687 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.587698 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.690852 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.690894 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.690904 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.690921 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.690931 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.793101 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.793144 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.793154 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.793171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.793184 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.895839 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.895884 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.895897 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.895912 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.895922 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.966098 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:37:11.780519255 +0000 UTC
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.982456 4737 scope.go:117] "RemoveContainer" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.998892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.998953 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.998970 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.998996 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:45 crc kubenswrapper[4737]: I0126 18:31:45.999016 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:45Z","lastTransitionTime":"2026-01-26T18:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.101113 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.101165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.101177 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.101195 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.101206 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.204615 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.204663 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.204673 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.204691 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.204702 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.307545 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.307585 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.307597 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.307616 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.307627 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.410315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.410371 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.410387 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.410405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.410415 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.513286 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.513347 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.513364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.513386 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.513409 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.616763 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.616804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.616814 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.616829 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.616838 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.720314 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.720358 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.720391 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.720413 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.720424 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.822821 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.822865 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.822876 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.822892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.822903 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.850513 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/2.log"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.853009 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4"}
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.853411 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk"
Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.866426 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.876936 4737 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94095e78-9414-4124-97ef-06acf16f3751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640f
ea83afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.891648 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.903183 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.913399 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926867 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926919 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926935 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926946 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:46Z","lastTransitionTime":"2026-01-26T18:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.926992 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.938609 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.949613 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.961481 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.967042 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:36:34.054177821 +0000 UTC Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.974647 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.981494 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.981597 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:46 crc kubenswrapper[4737]: E0126 18:31:46.981647 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.981496 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.981517 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:46 crc kubenswrapper[4737]: E0126 18:31:46.981800 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:46 crc kubenswrapper[4737]: E0126 18:31:46.981928 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:46 crc kubenswrapper[4737]: E0126 18:31:46.981999 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:46 crc kubenswrapper[4737]: I0126 18:31:46.992447 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn
kube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.004985 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.024012 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.029447 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.029678 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.029888 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.030061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.030251 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.037268 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.050601 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc 
kubenswrapper[4737]: I0126 18:31:47.073984 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.085824 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.096707 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.105836 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.117855 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.127041 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94095e78-9414-4124-97ef-06acf16f3751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.132495 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.132520 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.132528 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.132544 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.132555 4737 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.137560 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\
"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.147892 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.159741 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.175643 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.190661 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.207097 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.237479 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.237576 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.237592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.237613 4737 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.237625 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.243190 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.256456 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.263605 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.263652 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.263674 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.263698 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.263714 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.275480 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.282303 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.286925 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.287010 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.287028 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.287050 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.287063 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.298529 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn
kube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.301597 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.305397 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.305440 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.305448 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.305463 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.305476 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.311223 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efb
ea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.318169 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.321181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.321298 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.321411 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.321491 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.321620 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.327262 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.333266 4737 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"163b9b97-5fa6-4443-9f0c-6d278a8ade1d\\\",\\\"systemUUID\\\":\\\"4ebf7606-e2ee-4d28-b0b5-b6f922331ef2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.333740 4737 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.335766 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.335877 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.335944 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.336023 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.336104 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.338530 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.349383 4737 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc 
kubenswrapper[4737]: I0126 18:31:47.368956 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.384113 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.401702 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.415043 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.438403 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.438447 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.438459 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.438474 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.438487 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.541677 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.541718 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.541727 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.541741 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.541754 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.644061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.644123 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.644134 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.644151 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.644164 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.747178 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.747218 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.747227 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.747242 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.747253 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.849673 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.849713 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.849723 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.849738 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.849748 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.858682 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/3.log" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.859364 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/2.log" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.862916 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" exitCode=1 Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.862981 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.863133 4737 scope.go:117] "RemoveContainer" containerID="046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.863917 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:31:47 crc kubenswrapper[4737]: E0126 18:31:47.864167 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.881330 4737 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e
51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.894728 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.908583 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.922352 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.935514 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.953031 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.953133 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.953190 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.953222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.953345 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:47Z","lastTransitionTime":"2026-01-26T18:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.957898 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://046202f8fbac321bcb6ceb2a70e0b655bf88d62a5c28a1c43a1a815ad3b2f87d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:17Z\\\",\\\"message\\\":\\\"ocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.254:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e4e4203e-87c7-4024-930a-5d6bdfe2bdde}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0126 
18:31:16.841538 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:16Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:16.841546 6403 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:46Z\\\",\\\"message\\\":\\\"d already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 
2025-08-24T17:21:41Z]\\\\nI0126 18:31:46.713044 6805 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.967773 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:18:39.722376653 +0000 UTC Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.973850 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:47 crc kubenswrapper[4737]: I0126 18:31:47.992612 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:47Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.004983 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.016428 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc 
kubenswrapper[4737]: I0126 18:31:48.037296 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.056202 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.056759 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.056818 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.056836 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc 
kubenswrapper[4737]: I0126 18:31:48.056863 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.056885 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.071552 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.087123 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.103994 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.118010 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94095e78-9414-4124-97ef-06acf16f3751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.131222 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.145030 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.157323 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.159368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.159460 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.159488 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.159531 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.159570 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.262388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.262458 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.262479 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.262512 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.262535 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.366368 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.366405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.366418 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.366435 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.366448 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.468891 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.468951 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.468963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.468979 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.468989 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.571558 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.571606 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.571619 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.571638 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.571651 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.674022 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.674108 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.674125 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.674143 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.674154 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.777063 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.777155 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.777171 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.777216 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.777230 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.790287 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.790445 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:32:52.790426585 +0000 UTC m=+146.098621283 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.869999 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/3.log" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.874331 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.874528 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.879197 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.879245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.879281 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 
18:31:48.879300 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.879312 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.888539 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.891182 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.891219 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.891250 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.891323 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891398 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891425 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891439 4737 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891453 4737 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891463 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891488 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.891470731 +0000 UTC m=+146.199665429 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891490 4737 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891399 4737 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891509 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.891501702 +0000 UTC m=+146.199696410 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891510 4737 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891543 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.891524763 +0000 UTC m=+146.199719471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.891575 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-26 18:32:52.891555274 +0000 UTC m=+146.199750152 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.900242 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fsmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79f4091b-95d7-420a-b90a-1b6f48fb634e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://182bb7a343b62287950a4012ccd463ab6a7d339540f40db94e83248958d49095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335a
c39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fsmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.922882 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6554c7-415f-457d-8121-82981ebe2781\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2838d2a1b16be346b2d6a63998cd81416ef81978be369242fae471f6a53fdbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf97240160ecd3f4e73effbeb33f85dad6c12afbfe10315b8624d5c366945d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbe9f1ae9deaee4bbb0db6d490c25bd86326a3b962d2221cffa8c7e8431cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35617b01f73620a31d80cfbb5bc2c444ee37cdf3cfd67d62b70f36c6738bfc83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b2decc4fe0a94f1c54bc9b532267b0cbac17f7762e628835a11ba40561c8971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00781795e94070489f8895fff046c84e764ef7ea3aa53a4a59973863cdf65935\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f98198349774624153e2a9325792990364ae8741e60bdf06a0a0bd15a70ee09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f7694819f63f1362dd7f72022b7c9a3b0337715d6e8d8857502fc3eaf34aa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.937375 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.951622 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qjff2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82627aad-2019-482e-934a-7e9729927a34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:31Z\\\",\\\"message\\\":\\\"2026-01-26T18:30:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927\\\\n2026-01-26T18:30:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_73f0df80-6376-4ba2-b9e3-93d21fcc0927 to /host/opt/cni/bin/\\\\n2026-01-26T18:30:46Z [verbose] multus-daemon started\\\\n2026-01-26T18:30:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:31:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:31:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9ggl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qjff2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.965521 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afd75772-7900-46c3-b392-afb075e1cc08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44e1f827ccc2bfeece3e663dd96fc5e48e301dbf7ac31e381e7a33a8a4a422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174
e1f68a050834be1f5bedfec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9v4q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qxkj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.967983 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:44:46.293713794 +0000 UTC Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.978678 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gxxjs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"632d368f-0ceb-4edc-aac0-b760c24da635\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://045cdffff188229daeee7faf3a96a61c6b0ab18fdd0908f528b8a2a5b19059bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrskd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gxxjs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981094 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981149 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981190 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981207 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.981244 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981255 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981297 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:48Z","lastTransitionTime":"2026-01-26T18:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981329 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.981090 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.981484 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.981557 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:48 crc kubenswrapper[4737]: E0126 18:31:48.981601 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:48 crc kubenswrapper[4737]: I0126 18:31:48.994928 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d16415ca-2740-4247-846a-9afd1ebcca48\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4f461b168b044c50f281bafc5c7ef0d877392e3cc72edc7b2f0028cf8fe6b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ccdee3654b2923f02f6071aa3924d0934ed028d809dfbf120ba387637632dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c275106783e56387249df9619e22fd0eca28516545f77cead21b8c925f9c36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.007997 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94095e78-9414-4124-97ef-06acf16f3751\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d52ac89fea984085d49fba71ada8accb5c8a57c7d898b2b3f994cd01a485c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82b8f6ddca9dd101abf072f2cd61c297b2dd32397a6ab33c8aec8640fea83afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.022607 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925178b6076a7c576bc84fb58255bac5e1dcd86eda3fd94f0f93504a7cd7625a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548ccd6a70ea74a2030c871c94d8d7ac1de313de023b6a16b4a3a3bb2a2d7003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.036685 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e65f82894ec49f5a88663c42b77ad7d6f19fa922c45052d24272144140f979b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.052179 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.074211 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb40773-20dc-48ef-bf7f-17f4a042b01c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:31:46Z\\\",\\\"message\\\":\\\"d already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:46Z is after 2025-08-24T17:21:41Z]\\\\nI0126 18:31:46.713044 6805 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"62af83f3-e0c8-4632-aaaa-17488566a9d8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-controllers_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/mach\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:31:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a45002c02d30f093be
7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnp4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jgjrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.090360 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.090571 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.091486 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.091538 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.091559 4737 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.097933 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d641e5-0291-480c-9413-478267450e45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2
09ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:30:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:30:39.472985 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:30:39.474507 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1374176662/tls.crt::/tmp/serving-cert-1374176662/tls.key\\\\\\\"\\\\nI0126 18:30:44.993991 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:30:44.996847 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:30:44.996868 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:30:44.996891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:30:44.996897 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:30:45.005311 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0126 18:30:45.005355 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 18:30:45.005375 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005386 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:30:45.005391 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:30:45.005396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:30:45.005400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:30:45.005403 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 18:30:45.006492 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.114343 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3512c1850ad62aad579725558f83686c93dad645cc56cc852438dc2b4a6c35c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.130964 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.145091 4737 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc 
kubenswrapper[4737]: I0126 18:31:49.158975 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.182002 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.194400 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.194460 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.194473 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.194491 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.194505 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.297339 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.297409 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.297432 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.297464 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.297485 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.400976 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.401455 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.401632 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.401823 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.401974 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.504551 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.504590 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.504599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.504616 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.504627 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.608770 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.608838 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.608855 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.608881 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.608898 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.711505 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.711592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.711621 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.711731 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.711761 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.814669 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.814707 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.814719 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.814738 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.814750 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.917498 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.918019 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.918272 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.918475 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.918692 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:49Z","lastTransitionTime":"2026-01-26T18:31:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:49 crc kubenswrapper[4737]: I0126 18:31:49.968847 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:40:45.707646983 +0000 UTC Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.021687 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.021733 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.021762 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.021783 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.021799 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.125293 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.125628 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.125694 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.125766 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.125834 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.228910 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.228965 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.228984 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.229011 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.229029 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.331692 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.332061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.332225 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.332423 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.332553 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.436283 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.436350 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.436369 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.436399 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.436429 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.539526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.540349 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.540434 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.540469 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.540489 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.644130 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.644203 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.644217 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.644239 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.644256 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.747716 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.747784 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.747804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.747830 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.747850 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.851590 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.851669 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.851686 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.851710 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.851729 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.954604 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.954668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.954687 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.954713 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.954732 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:50Z","lastTransitionTime":"2026-01-26T18:31:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.969905 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:45:26.453102596 +0000 UTC Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.981703 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.981805 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.981743 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:50 crc kubenswrapper[4737]: I0126 18:31:50.981730 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:50 crc kubenswrapper[4737]: E0126 18:31:50.981947 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:50 crc kubenswrapper[4737]: E0126 18:31:50.982049 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:50 crc kubenswrapper[4737]: E0126 18:31:50.982193 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:50 crc kubenswrapper[4737]: E0126 18:31:50.982311 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.057990 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.058048 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.058058 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.058092 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.058103 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.160296 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.160337 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.160348 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.160363 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.160375 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.263446 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.263485 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.263502 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.263526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.263545 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.367510 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.367567 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.367585 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.367609 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.367628 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.471800 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.471891 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.471920 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.471952 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.471979 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.575482 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.575545 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.575557 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.575577 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.575590 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.678801 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.678891 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.678927 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.678960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.678984 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.782317 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.782364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.782374 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.782392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.782403 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.884721 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.884780 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.884790 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.884807 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.884819 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.970935 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:49:18.180604652 +0000 UTC Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.987448 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.987497 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.987509 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.987526 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:51 crc kubenswrapper[4737]: I0126 18:31:51.987536 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:51Z","lastTransitionTime":"2026-01-26T18:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.090308 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.090364 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.090376 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.090396 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.090408 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.193816 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.193892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.193909 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.193934 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.193952 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.296811 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.296918 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.296949 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.296981 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.297005 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.400099 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.400149 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.400163 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.400181 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.400194 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.503265 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.503315 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.503326 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.503348 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.503362 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.606879 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.606925 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.606937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.606954 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.606965 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.709741 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.709804 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.709817 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.709859 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.709872 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.812156 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.812220 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.812238 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.812266 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.812286 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.914959 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.915013 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.915025 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.915047 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.915059 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:52Z","lastTransitionTime":"2026-01-26T18:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.972162 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:35:33.942969066 +0000 UTC Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.981709 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.981765 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.981787 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:52 crc kubenswrapper[4737]: E0126 18:31:52.981925 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:52 crc kubenswrapper[4737]: I0126 18:31:52.982050 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:52 crc kubenswrapper[4737]: E0126 18:31:52.982400 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:52 crc kubenswrapper[4737]: E0126 18:31:52.982860 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:52 crc kubenswrapper[4737]: E0126 18:31:52.983185 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.018304 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.018410 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.018470 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.018502 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.018549 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.122598 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.122669 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.122706 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.122750 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.122777 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.225929 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.226031 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.226057 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.226156 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.226185 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.329103 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.329166 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.329184 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.329208 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.329226 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.432672 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.432729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.432748 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.432772 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.432789 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.535392 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.535485 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.535504 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.535527 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.535545 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.638716 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.638778 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.638797 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.638822 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.638847 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.742027 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.742123 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.742141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.742165 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.742184 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.846030 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.846136 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.846161 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.846191 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.846211 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.949027 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.949120 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.949186 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.949213 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.949307 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:53Z","lastTransitionTime":"2026-01-26T18:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:53 crc kubenswrapper[4737]: I0126 18:31:53.973046 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:08:26.422102573 +0000 UTC Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.052302 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.052370 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.052384 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.052405 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.052421 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.155960 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.156033 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.156061 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.156141 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.156168 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.261117 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.261188 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.261228 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.261263 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.261285 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.365277 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.365352 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.365377 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.365413 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.365440 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.469326 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.469388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.469406 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.469433 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.469451 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.573411 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.573503 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.573533 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.573571 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.573596 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.678634 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.679342 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.679402 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.679434 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.679448 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.783142 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.783224 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.783245 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.783277 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.783304 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.886408 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.886485 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.886508 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.886537 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.886556 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.973846 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:26:25.444428755 +0000 UTC Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.981272 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:54 crc kubenswrapper[4737]: E0126 18:31:54.981579 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.981982 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:54 crc kubenswrapper[4737]: E0126 18:31:54.982248 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.982403 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.982505 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:54 crc kubenswrapper[4737]: E0126 18:31:54.982576 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:54 crc kubenswrapper[4737]: E0126 18:31:54.982728 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.988513 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.988560 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.988575 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.988597 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:54 crc kubenswrapper[4737]: I0126 18:31:54.988760 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:54Z","lastTransitionTime":"2026-01-26T18:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.091776 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.091874 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.091892 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.091940 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.091957 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.195320 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.195363 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.195371 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.195388 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.195398 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.298624 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.298704 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.298727 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.298761 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.298791 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.402195 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.402259 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.402273 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.402291 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.402301 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.506108 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.506175 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.506200 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.506234 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.506259 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.609614 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.609668 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.609681 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.609733 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.609748 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.712485 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.712572 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.712582 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.712600 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.712612 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.814591 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.814647 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.814661 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.814682 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.814698 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.917599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.917669 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.917682 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.917700 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.917714 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:55Z","lastTransitionTime":"2026-01-26T18:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:55 crc kubenswrapper[4737]: I0126 18:31:55.973976 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:01:50.012124403 +0000 UTC Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.020641 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.020963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.021219 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.021482 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.021665 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.125631 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.125693 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.125707 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.125727 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.125738 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.228762 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.228843 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.228865 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.228895 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.228921 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.333109 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.333222 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.333243 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.333269 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.333284 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.436515 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.436569 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.436580 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.436599 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.436611 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.539998 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.540119 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.540147 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.540183 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.540208 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.644160 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.644231 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.644247 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.644270 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.644285 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.747065 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.747209 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.747228 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.747262 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.747284 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.850847 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.850913 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.850933 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.850957 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.850975 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.954135 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.954195 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.954208 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.954238 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.954253 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:56Z","lastTransitionTime":"2026-01-26T18:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.975031 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:41:37.439345346 +0000 UTC Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.981945 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.982130 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:56 crc kubenswrapper[4737]: E0126 18:31:56.982176 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.982477 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:56 crc kubenswrapper[4737]: I0126 18:31:56.982517 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:56 crc kubenswrapper[4737]: E0126 18:31:56.982505 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:56 crc kubenswrapper[4737]: E0126 18:31:56.983343 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:31:56 crc kubenswrapper[4737]: E0126 18:31:56.983643 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.005638 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a777e838-21c0-4be5-9c8d-66ffb95135e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:31:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e22cbaea409b90eb9ad8f629cc94f12d5d94913c660d1f4ecbf3b1dd136d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15312b4318e6f2175734be08ac5efbea4b0a46e2810e7223575671671408a157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81db4bac81727e02147b813300003fba15b7daf01d124d40ee30e4a87446ed1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aba885244febd5d5191fbd34d2ee56412140bedfaf405e1a6b8bdeba2814f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.044173 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvbml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f32d3b75-6d15-4fb7-9559-d3df1d77071e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e973f3c659c65849958ccb32d18d8e67d42874690df337699f6cf976485c536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8e3b31d856c5896694946164e5a67ae89eed558f644c46cbd8567621d2e93f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26440d609933b26710b9b795d22f93f3a3e237334cdf59b09fab7a59bebb124f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0afbcc81c84d781765314070a4e819effd6966302e4e6626d6e6f31a50ce6b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://964d4
efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964d4efebd36c98e04ce2d36427221cf4b898116bc050a65424de9e79e46b3bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c92823989e88b6148f741cfc3d548371e30589b5cfd7b16e954ebd4355399184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e81b1b4cdfa531e63bf8499478cc1f6813d659b2b1b160d374133713382cff7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:30:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:30:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4jhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvbml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.058038 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.058229 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.058258 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.058329 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.058353 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.064035 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bc7b559-f4f0-47b0-b148-6d0915785538\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10904723390bf4505ed547f04ed3a24b1e7debcf7f089e7de30eb5166c8f6d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4df8c189f585082008e31ded41ba96e5939a894300f9dc29b53768a05cea54c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knvgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rzpxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.084288 4737 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3aadb5-b908-4300-af5f-e3c37dff9e14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:30:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7cfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:30:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4pv7r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:31:57Z is after 2025-08-24T17:21:41Z" Jan 26 18:31:57 crc 
kubenswrapper[4737]: I0126 18:31:57.129877 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=72.129853953 podStartE2EDuration="1m12.129853953s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.127783448 +0000 UTC m=+90.435978156" watchObservedRunningTime="2026-01-26 18:31:57.129853953 +0000 UTC m=+90.438048671" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.160863 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.160919 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.160937 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.160963 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.160978 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.204892 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fsmsj" podStartSLOduration=73.204867292 podStartE2EDuration="1m13.204867292s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.188925291 +0000 UTC m=+90.497120029" watchObservedRunningTime="2026-01-26 18:31:57.204867292 +0000 UTC m=+90.513062000" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.222666 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gxxjs" podStartSLOduration=72.222633091 podStartE2EDuration="1m12.222633091s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.20514795 +0000 UTC m=+90.513342698" watchObservedRunningTime="2026-01-26 18:31:57.222633091 +0000 UTC m=+90.530827829" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.240044 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.240012379 podStartE2EDuration="1m13.240012379s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.223238756 +0000 UTC m=+90.531433474" watchObservedRunningTime="2026-01-26 18:31:57.240012379 +0000 UTC m=+90.548207107" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.240314 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.240305028 
podStartE2EDuration="24.240305028s" podCreationTimestamp="2026-01-26 18:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.23964337 +0000 UTC m=+90.547838088" watchObservedRunningTime="2026-01-26 18:31:57.240305028 +0000 UTC m=+90.548499756" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.261408 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qjff2" podStartSLOduration=73.261381944 podStartE2EDuration="1m13.261381944s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.261300502 +0000 UTC m=+90.569495220" watchObservedRunningTime="2026-01-26 18:31:57.261381944 +0000 UTC m=+90.569576672" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.263882 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.263929 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.263941 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.263961 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.263975 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.298672 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podStartSLOduration=73.298642097 podStartE2EDuration="1m13.298642097s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.282962553 +0000 UTC m=+90.591157271" watchObservedRunningTime="2026-01-26 18:31:57.298642097 +0000 UTC m=+90.606836815" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.340292 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.340266745 podStartE2EDuration="1m12.340266745s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.339815954 +0000 UTC m=+90.648010702" watchObservedRunningTime="2026-01-26 18:31:57.340266745 +0000 UTC m=+90.648461453" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.366138 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.366186 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.366197 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.366216 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 
18:31:57.366228 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.468345 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.468721 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.468836 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.468923 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.468997 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.572667 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.572699 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.572711 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.572729 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.572740 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.671521 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.671561 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.671573 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.671592 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.671606 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.696849 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.696900 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.696911 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.696943 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.696956 4737 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:31:57Z","lastTransitionTime":"2026-01-26T18:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.726047 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx"] Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.726635 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.728828 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.728985 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.729110 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.729253 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.799465 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4b32731-6e6c-42ed-aec2-79a16d2078a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.799509 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.799544 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b4b32731-6e6c-42ed-aec2-79a16d2078a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.799616 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b32731-6e6c-42ed-aec2-79a16d2078a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.799657 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.803231 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cvbml" podStartSLOduration=73.803192822 podStartE2EDuration="1m13.803192822s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.80236972 +0000 UTC m=+91.110564428" watchObservedRunningTime="2026-01-26 18:31:57.803192822 +0000 UTC m=+91.111387560" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.805535 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.805520423 
podStartE2EDuration="36.805520423s" podCreationTimestamp="2026-01-26 18:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.77658207 +0000 UTC m=+91.084776778" watchObservedRunningTime="2026-01-26 18:31:57.805520423 +0000 UTC m=+91.113715161" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900707 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b32731-6e6c-42ed-aec2-79a16d2078a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900791 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b32731-6e6c-42ed-aec2-79a16d2078a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900821 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900848 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4b32731-6e6c-42ed-aec2-79a16d2078a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: 
\"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900869 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.900919 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.901032 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b4b32731-6e6c-42ed-aec2-79a16d2078a4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.901782 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4b32731-6e6c-42ed-aec2-79a16d2078a4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.909816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b4b32731-6e6c-42ed-aec2-79a16d2078a4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.926527 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4b32731-6e6c-42ed-aec2-79a16d2078a4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tz5zx\" (UID: \"b4b32731-6e6c-42ed-aec2-79a16d2078a4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.976148 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:43:50.947558518 +0000 UTC Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.976526 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 18:31:57 crc kubenswrapper[4737]: I0126 18:31:57.985825 4737 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.040652 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.913168 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" event={"ID":"b4b32731-6e6c-42ed-aec2-79a16d2078a4","Type":"ContainerStarted","Data":"ed7c466cb1b012ed7943c2e5b5ddd16b191cbb3af040f0813d87ab3a326bc288"} Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.913238 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" event={"ID":"b4b32731-6e6c-42ed-aec2-79a16d2078a4","Type":"ContainerStarted","Data":"ff85295451a0b494491759dd8f35878fb5ae81901ae1c1311004620c7f2e5cab"} Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.932440 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rzpxj" podStartSLOduration=73.932418643 podStartE2EDuration="1m13.932418643s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:57.817980022 +0000 UTC m=+91.126174730" watchObservedRunningTime="2026-01-26 18:31:58.932418643 +0000 UTC m=+92.240613351" Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.980952 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.981028 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:31:58 crc kubenswrapper[4737]: E0126 18:31:58.981135 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.981228 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:31:58 crc kubenswrapper[4737]: E0126 18:31:58.981434 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:31:58 crc kubenswrapper[4737]: E0126 18:31:58.981587 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:31:58 crc kubenswrapper[4737]: I0126 18:31:58.981965 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:31:58 crc kubenswrapper[4737]: E0126 18:31:58.982384 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:00 crc kubenswrapper[4737]: I0126 18:32:00.981486 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:00 crc kubenswrapper[4737]: I0126 18:32:00.981665 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:00 crc kubenswrapper[4737]: I0126 18:32:00.981697 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:00 crc kubenswrapper[4737]: E0126 18:32:00.981861 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:00 crc kubenswrapper[4737]: I0126 18:32:00.982012 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:00 crc kubenswrapper[4737]: E0126 18:32:00.982128 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:00 crc kubenswrapper[4737]: E0126 18:32:00.982214 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:00 crc kubenswrapper[4737]: E0126 18:32:00.982522 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:02 crc kubenswrapper[4737]: I0126 18:32:02.981617 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:02 crc kubenswrapper[4737]: I0126 18:32:02.981660 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:02 crc kubenswrapper[4737]: I0126 18:32:02.981805 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:02 crc kubenswrapper[4737]: E0126 18:32:02.981807 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:02 crc kubenswrapper[4737]: E0126 18:32:02.982288 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:02 crc kubenswrapper[4737]: I0126 18:32:02.982457 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:02 crc kubenswrapper[4737]: E0126 18:32:02.982492 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:02 crc kubenswrapper[4737]: E0126 18:32:02.982561 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:02 crc kubenswrapper[4737]: I0126 18:32:02.983130 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:32:02 crc kubenswrapper[4737]: E0126 18:32:02.983494 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:32:03 crc kubenswrapper[4737]: I0126 18:32:03.468790 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:03 crc kubenswrapper[4737]: E0126 18:32:03.469107 4737 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:32:03 crc kubenswrapper[4737]: E0126 18:32:03.469256 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs podName:1a3aadb5-b908-4300-af5f-e3c37dff9e14 nodeName:}" failed. No retries permitted until 2026-01-26 18:33:07.469215949 +0000 UTC m=+160.777410687 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs") pod "network-metrics-daemon-4pv7r" (UID: "1a3aadb5-b908-4300-af5f-e3c37dff9e14") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:32:04 crc kubenswrapper[4737]: I0126 18:32:04.981889 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:04 crc kubenswrapper[4737]: I0126 18:32:04.981920 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:04 crc kubenswrapper[4737]: I0126 18:32:04.981984 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:04 crc kubenswrapper[4737]: I0126 18:32:04.982478 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:04 crc kubenswrapper[4737]: E0126 18:32:04.983697 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:04 crc kubenswrapper[4737]: E0126 18:32:04.984347 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:04 crc kubenswrapper[4737]: E0126 18:32:04.984891 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:04 crc kubenswrapper[4737]: E0126 18:32:04.985617 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:06 crc kubenswrapper[4737]: I0126 18:32:06.980786 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:06 crc kubenswrapper[4737]: I0126 18:32:06.980892 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:06 crc kubenswrapper[4737]: E0126 18:32:06.982560 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:06 crc kubenswrapper[4737]: I0126 18:32:06.982728 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:06 crc kubenswrapper[4737]: I0126 18:32:06.982721 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:06 crc kubenswrapper[4737]: E0126 18:32:06.983112 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:06 crc kubenswrapper[4737]: E0126 18:32:06.983195 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:06 crc kubenswrapper[4737]: E0126 18:32:06.982990 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:08 crc kubenswrapper[4737]: I0126 18:32:08.981937 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:08 crc kubenswrapper[4737]: I0126 18:32:08.982037 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:08 crc kubenswrapper[4737]: E0126 18:32:08.982579 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:08 crc kubenswrapper[4737]: I0126 18:32:08.982215 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:08 crc kubenswrapper[4737]: E0126 18:32:08.982726 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:08 crc kubenswrapper[4737]: I0126 18:32:08.982121 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:08 crc kubenswrapper[4737]: E0126 18:32:08.982847 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:08 crc kubenswrapper[4737]: E0126 18:32:08.982905 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:10 crc kubenswrapper[4737]: I0126 18:32:10.981657 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:10 crc kubenswrapper[4737]: E0126 18:32:10.981923 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:10 crc kubenswrapper[4737]: I0126 18:32:10.982243 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:10 crc kubenswrapper[4737]: I0126 18:32:10.982263 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:10 crc kubenswrapper[4737]: E0126 18:32:10.982410 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:10 crc kubenswrapper[4737]: I0126 18:32:10.982565 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:10 crc kubenswrapper[4737]: E0126 18:32:10.982708 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:10 crc kubenswrapper[4737]: E0126 18:32:10.982868 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:12 crc kubenswrapper[4737]: I0126 18:32:12.981226 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:12 crc kubenswrapper[4737]: E0126 18:32:12.981350 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:12 crc kubenswrapper[4737]: I0126 18:32:12.981464 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:12 crc kubenswrapper[4737]: I0126 18:32:12.981480 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:12 crc kubenswrapper[4737]: E0126 18:32:12.981661 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:12 crc kubenswrapper[4737]: E0126 18:32:12.981758 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:12 crc kubenswrapper[4737]: I0126 18:32:12.981243 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:12 crc kubenswrapper[4737]: E0126 18:32:12.982177 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:14 crc kubenswrapper[4737]: I0126 18:32:14.981638 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:14 crc kubenswrapper[4737]: E0126 18:32:14.981781 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:14 crc kubenswrapper[4737]: I0126 18:32:14.982102 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:14 crc kubenswrapper[4737]: E0126 18:32:14.982186 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:14 crc kubenswrapper[4737]: I0126 18:32:14.982950 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:14 crc kubenswrapper[4737]: I0126 18:32:14.983091 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:32:14 crc kubenswrapper[4737]: I0126 18:32:14.983264 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:14 crc kubenswrapper[4737]: E0126 18:32:14.983510 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:14 crc kubenswrapper[4737]: E0126 18:32:14.983690 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:14 crc kubenswrapper[4737]: E0126 18:32:14.983965 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jgjrk_openshift-ovn-kubernetes(ecb40773-20dc-48ef-bf7f-17f4a042b01c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" Jan 26 18:32:16 crc kubenswrapper[4737]: I0126 18:32:16.980979 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:16 crc kubenswrapper[4737]: I0126 18:32:16.981100 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:16 crc kubenswrapper[4737]: I0126 18:32:16.981152 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:16 crc kubenswrapper[4737]: E0126 18:32:16.983209 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:16 crc kubenswrapper[4737]: I0126 18:32:16.983429 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:16 crc kubenswrapper[4737]: E0126 18:32:16.983624 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:16 crc kubenswrapper[4737]: E0126 18:32:16.983777 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:16 crc kubenswrapper[4737]: E0126 18:32:16.983883 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.981514 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:18 crc kubenswrapper[4737]: E0126 18:32:18.981917 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.983695 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.983896 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:18 crc kubenswrapper[4737]: E0126 18:32:18.983915 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:18 crc kubenswrapper[4737]: E0126 18:32:18.984142 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.984298 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:18 crc kubenswrapper[4737]: E0126 18:32:18.984504 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.996868 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/1.log" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.997895 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/0.log" Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.997957 4737 generic.go:334] "Generic (PLEG): container finished" podID="82627aad-2019-482e-934a-7e9729927a34" containerID="debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2" exitCode=1 Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.998001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerDied","Data":"debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2"} Jan 26 18:32:18 crc kubenswrapper[4737]: I0126 18:32:18.998047 4737 scope.go:117] "RemoveContainer" containerID="938d6c4b9c86f851e8346bde5364b9a2463869d85fef2bc4e705335f9253be4c" Jan 26 18:32:18 crc 
kubenswrapper[4737]: I0126 18:32:18.998529 4737 scope.go:117] "RemoveContainer" containerID="debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2" Jan 26 18:32:18 crc kubenswrapper[4737]: E0126 18:32:18.998759 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qjff2_openshift-multus(82627aad-2019-482e-934a-7e9729927a34)\"" pod="openshift-multus/multus-qjff2" podUID="82627aad-2019-482e-934a-7e9729927a34" Jan 26 18:32:19 crc kubenswrapper[4737]: I0126 18:32:19.025967 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tz5zx" podStartSLOduration=95.025942177 podStartE2EDuration="1m35.025942177s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:31:58.936307066 +0000 UTC m=+92.244501784" watchObservedRunningTime="2026-01-26 18:32:19.025942177 +0000 UTC m=+112.334136895" Jan 26 18:32:20 crc kubenswrapper[4737]: I0126 18:32:20.005676 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/1.log" Jan 26 18:32:20 crc kubenswrapper[4737]: I0126 18:32:20.980970 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:20 crc kubenswrapper[4737]: I0126 18:32:20.981139 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:20 crc kubenswrapper[4737]: I0126 18:32:20.981269 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:20 crc kubenswrapper[4737]: E0126 18:32:20.981555 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:20 crc kubenswrapper[4737]: I0126 18:32:20.981776 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:20 crc kubenswrapper[4737]: E0126 18:32:20.981910 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:20 crc kubenswrapper[4737]: E0126 18:32:20.982139 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:20 crc kubenswrapper[4737]: E0126 18:32:20.982353 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:22 crc kubenswrapper[4737]: I0126 18:32:22.981312 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:22 crc kubenswrapper[4737]: I0126 18:32:22.981395 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:22 crc kubenswrapper[4737]: I0126 18:32:22.981415 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:22 crc kubenswrapper[4737]: I0126 18:32:22.981325 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:22 crc kubenswrapper[4737]: E0126 18:32:22.981570 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:22 crc kubenswrapper[4737]: E0126 18:32:22.981733 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:22 crc kubenswrapper[4737]: E0126 18:32:22.982035 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:22 crc kubenswrapper[4737]: E0126 18:32:22.982121 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:24 crc kubenswrapper[4737]: I0126 18:32:24.982444 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:24 crc kubenswrapper[4737]: I0126 18:32:24.982638 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:24 crc kubenswrapper[4737]: E0126 18:32:24.983345 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:24 crc kubenswrapper[4737]: I0126 18:32:24.982566 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:24 crc kubenswrapper[4737]: E0126 18:32:24.983491 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:24 crc kubenswrapper[4737]: I0126 18:32:24.982658 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:24 crc kubenswrapper[4737]: E0126 18:32:24.983614 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:24 crc kubenswrapper[4737]: E0126 18:32:24.983665 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:26 crc kubenswrapper[4737]: E0126 18:32:26.958283 4737 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 18:32:26 crc kubenswrapper[4737]: I0126 18:32:26.981108 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:26 crc kubenswrapper[4737]: I0126 18:32:26.981195 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:26 crc kubenswrapper[4737]: I0126 18:32:26.981203 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:26 crc kubenswrapper[4737]: I0126 18:32:26.981445 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:26 crc kubenswrapper[4737]: E0126 18:32:26.984137 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:26 crc kubenswrapper[4737]: E0126 18:32:26.984816 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:26 crc kubenswrapper[4737]: E0126 18:32:26.984938 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:26 crc kubenswrapper[4737]: E0126 18:32:26.984957 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:27 crc kubenswrapper[4737]: E0126 18:32:27.068778 4737 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 18:32:27 crc kubenswrapper[4737]: I0126 18:32:27.983026 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:32:28 crc kubenswrapper[4737]: I0126 18:32:28.982035 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:28 crc kubenswrapper[4737]: E0126 18:32:28.982802 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:28 crc kubenswrapper[4737]: I0126 18:32:28.982110 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:28 crc kubenswrapper[4737]: E0126 18:32:28.982979 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:28 crc kubenswrapper[4737]: I0126 18:32:28.982244 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:28 crc kubenswrapper[4737]: E0126 18:32:28.983134 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:28 crc kubenswrapper[4737]: I0126 18:32:28.982101 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:28 crc kubenswrapper[4737]: E0126 18:32:28.983269 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.047214 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/3.log" Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.050611 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerStarted","Data":"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b"} Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.051256 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.100786 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podStartSLOduration=104.100757084 podStartE2EDuration="1m44.100757084s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:29.099214993 +0000 UTC m=+122.407409701" watchObservedRunningTime="2026-01-26 18:32:29.100757084 +0000 UTC m=+122.408951812" Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.102336 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4pv7r"] Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.102470 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:29 crc kubenswrapper[4737]: E0126 18:32:29.102602 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:29 crc kubenswrapper[4737]: I0126 18:32:29.982554 4737 scope.go:117] "RemoveContainer" containerID="debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2" Jan 26 18:32:30 crc kubenswrapper[4737]: I0126 18:32:30.981114 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:30 crc kubenswrapper[4737]: I0126 18:32:30.981059 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:30 crc kubenswrapper[4737]: I0126 18:32:30.981182 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:30 crc kubenswrapper[4737]: I0126 18:32:30.981229 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:30 crc kubenswrapper[4737]: E0126 18:32:30.981592 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:32:30 crc kubenswrapper[4737]: E0126 18:32:30.981908 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:32:30 crc kubenswrapper[4737]: E0126 18:32:30.982171 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4pv7r" podUID="1a3aadb5-b908-4300-af5f-e3c37dff9e14" Jan 26 18:32:30 crc kubenswrapper[4737]: E0126 18:32:30.982254 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:32:31 crc kubenswrapper[4737]: I0126 18:32:31.065790 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/1.log" Jan 26 18:32:31 crc kubenswrapper[4737]: I0126 18:32:31.065901 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerStarted","Data":"00b3a8ab493480704ad64a0ee4fdc318b56fbd72df74360380e03d02e458cb9a"} Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.982276 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.982277 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.982325 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.982359 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.987833 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.988452 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.988567 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.988654 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.988891 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 18:32:32 crc kubenswrapper[4737]: I0126 18:32:32.989053 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.016570 4737 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.062050 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jxs2"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.063202 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.069886 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9kjp9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.070560 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.071617 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.071910 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.075532 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.076318 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.077255 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.077963 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.079614 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.080688 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.081091 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.081848 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.089417 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.089749 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.089941 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.090813 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.091212 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.091795 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.091975 4737 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.092166 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: W0126 18:32:39.092299 4737 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 26 18:32:39 crc kubenswrapper[4737]: E0126 18:32:39.092335 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.092423 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.092677 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.092835 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.093446 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.093589 4737 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.094554 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.094802 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.094926 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095176 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095304 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095401 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095783 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095794 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.095901 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.096019 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 
18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.096027 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.096543 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.096910 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.097450 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.097557 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.097680 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.097835 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098021 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098033 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098157 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098337 4737 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098358 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098579 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098676 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098718 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098679 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.100455 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hbdm4"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.112553 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.112814 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9spd"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.113162 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.113458 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.113857 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.114428 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.114753 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098743 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.114923 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.115009 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.115196 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.116056 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.098743 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.116815 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.117105 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.117267 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.117640 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.119357 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.121232 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.144611 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.146684 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.164861 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.165854 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.166933 4737 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.167061 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.167265 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.167735 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.167908 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.168486 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.168820 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.168942 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169088 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169141 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169186 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc 
kubenswrapper[4737]: I0126 18:32:39.167087 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169257 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169285 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169338 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.169409 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fm6nl"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.170399 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.171491 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.171537 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.171590 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.171729 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.171814 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.174997 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.175578 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqcjp"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.175937 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.176262 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.176606 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.177281 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.177997 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.178008 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.178481 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.178929 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.182170 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.182880 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.187483 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.188097 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.189550 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.190055 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.192641 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.196822 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197632 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197674 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc9h\" (UniqueName: \"kubernetes.io/projected/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-kube-api-access-qrc9h\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197696 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197719 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lxz\" (UniqueName: \"kubernetes.io/projected/887b083d-2d4b-4231-a109-f2e1d5d14c39-kube-api-access-27lxz\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197738 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197755 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197775 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-node-pullsecrets\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197792 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-client\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197809 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197831 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197846 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-config\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197861 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197881 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-auth-proxy-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197896 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-trusted-ca\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197912 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197927 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-serving-cert\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197947 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197977 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.197980 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-audit\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198043 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198065 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198115 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-encryption-config\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198177 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f792056c-fffa-4089-a040-8e09a1d6489f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198266 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198270 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198266 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-serving-cert\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198373 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-encryption-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.198391 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198413 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnqpt\" (UniqueName: \"kubernetes.io/projected/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-kube-api-access-lnqpt\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198429 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198450 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198455 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198466 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198483 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198499 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198570 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6dj\" (UniqueName: \"kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198599 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 
18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198620 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198636 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198664 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198680 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198696 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198713 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-serving-cert\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198729 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnxn\" (UniqueName: \"kubernetes.io/projected/8d2d9bc1-4264-4633-af76-b57166070ab0-kube-api-access-xhnxn\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198747 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f792056c-fffa-4089-a040-8e09a1d6489f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198766 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: 
\"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198789 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-etcd-client\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198835 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198859 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb8fc\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-kube-api-access-bb8fc\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198906 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198928 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-dir\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198949 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/887b083d-2d4b-4231-a109-f2e1d5d14c39-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198974 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198996 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdjb\" (UniqueName: \"kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199017 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199036 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-image-import-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199056 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199097 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhrc\" (UniqueName: \"kubernetes.io/projected/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-kube-api-access-blhrc\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199122 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38ea1569-149a-4a65-a61d-021204d2cde6-machine-approver-tls\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199142 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199178 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-policies\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199203 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/887b083d-2d4b-4231-a109-f2e1d5d14c39-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199238 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv66w\" (UniqueName: \"kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199302 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-audit-dir\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199342 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199364 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrx2\" (UniqueName: \"kubernetes.io/projected/858fe62f-567a-47e7-9847-c393790eb41f-kube-api-access-jwrx2\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199389 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzn95\" (UniqueName: \"kubernetes.io/projected/f792056c-fffa-4089-a040-8e09a1d6489f-kube-api-access-kzn95\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199413 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: 
\"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199437 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlqs\" (UniqueName: \"kubernetes.io/projected/38ea1569-149a-4a65-a61d-021204d2cde6-kube-api-access-4zlqs\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199460 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199477 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-serving-cert\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.198807 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199567 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199731 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 
18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199226 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199333 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199386 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199960 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199486 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.199529 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.200158 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.200475 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.200610 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.200711 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.200784 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.202455 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.203118 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ktwh7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.203576 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.225426 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.225977 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-brpd2"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.227130 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.234883 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.237930 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.257592 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.257838 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.258848 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.259906 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.262208 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.262643 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.265662 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.273180 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.273796 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.274427 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-scmj7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.275138 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wwzqx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.275637 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.275902 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.276610 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.277251 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.278129 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9kjp9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.278173 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.278141 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.288359 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.289110 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.290898 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300585 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-config\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300624 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300648 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-auth-proxy-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300671 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-metrics-tls\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300697 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-trusted-ca\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300714 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300731 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-serving-cert\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300748 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-config\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300767 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300783 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-audit\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300802 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300818 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300853 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-encryption-config\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300871 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f792056c-fffa-4089-a040-8e09a1d6489f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.300895 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-serving-cert\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300912 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-encryption-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300929 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300949 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnqpt\" (UniqueName: \"kubernetes.io/projected/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-kube-api-access-lnqpt\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300965 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 
26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.300982 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.301043 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.302425 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-trusted-ca\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.307291 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.307662 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.307944 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-auth-proxy-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.307975 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.308736 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.309059 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.309290 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.309717 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4vb5"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.309855 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-serving-cert\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.310180 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" 
Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.310581 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ea1569-149a-4a65-a61d-021204d2cde6-config\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.311177 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-config\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.311303 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.311683 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.311833 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.311993 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.312662 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-audit\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.313017 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-encryption-config\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.313291 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.313683 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.316579 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317001 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-964nh\" (UniqueName: \"kubernetes.io/projected/036a0e85-4072-4906-90a1-c87c319a4abe-kube-api-access-964nh\") pod \"migrator-59844c95c7-qw4sc\" (UID: \"036a0e85-4072-4906-90a1-c87c319a4abe\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317789 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f792056c-fffa-4089-a040-8e09a1d6489f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317855 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317906 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.317982 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwc2s\" (UniqueName: \"kubernetes.io/projected/abf4a817-2de4-4f69-9ad8-d15ed857d5ab-kube-api-access-dwc2s\") pod \"downloads-7954f5f757-brpd2\" (UID: \"abf4a817-2de4-4f69-9ad8-d15ed857d5ab\") " pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.318705 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.318779 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6dj\" (UniqueName: \"kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.318824 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.319055 4737 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.319505 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.319595 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.319645 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320451 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-serving-cert\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 
18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320596 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320659 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320693 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-kube-api-access-6l5bm\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320747 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbl4f\" (UniqueName: \"kubernetes.io/projected/4fecb426-1ec9-4ee4-aee7-f079d088dea4-kube-api-access-jbl4f\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320795 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320833 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-serving-cert\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320866 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2f1edb-4ba9-4745-ba10-2377d62e0313-config\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320898 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnxn\" (UniqueName: \"kubernetes.io/projected/8d2d9bc1-4264-4633-af76-b57166070ab0-kube-api-access-xhnxn\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f792056c-fffa-4089-a040-8e09a1d6489f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320954 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.320978 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-etcd-client\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321002 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321028 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb8fc\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-kube-api-access-bb8fc\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321089 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-client\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321111 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2f1edb-4ba9-4745-ba10-2377d62e0313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321135 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321153 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-dir\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321171 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/887b083d-2d4b-4231-a109-f2e1d5d14c39-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321191 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321204 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321213 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdjb\" (UniqueName: \"kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321270 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe2f1edb-4ba9-4745-ba10-2377d62e0313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.321298 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: 
\"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322079 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322160 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322514 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322548 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-image-import-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322689 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.322961 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.324957 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.326711 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/858fe62f-567a-47e7-9847-c393790eb41f-image-import-ca\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.326770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-dir\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.327399 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.327832 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329543 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329613 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329810 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329892 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blhrc\" (UniqueName: \"kubernetes.io/projected/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-kube-api-access-blhrc\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329925 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/38ea1569-149a-4a65-a61d-021204d2cde6-machine-approver-tls\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.329985 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.330121 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8p4v9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.330551 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.330862 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331472 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331502 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jxs2"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331516 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-scmj7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331528 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331542 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331555 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331565 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m5fhx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.331986 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.332720 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-serving-cert\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.333426 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.333684 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-encryption-config\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.334589 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-policies\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.334681 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/887b083d-2d4b-4231-a109-f2e1d5d14c39-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.334759 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.334942 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335120 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335251 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335301 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335535 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/887b083d-2d4b-4231-a109-f2e1d5d14c39-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335820 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.335276 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337094 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv66w\" (UniqueName: \"kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337185 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-audit-dir\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337270 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/887b083d-2d4b-4231-a109-f2e1d5d14c39-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337347 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-audit-dir\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337482 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-ca\") pod 
\"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360622 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360747 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrx2\" (UniqueName: \"kubernetes.io/projected/858fe62f-567a-47e7-9847-c393790eb41f-kube-api-access-jwrx2\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360812 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzn95\" (UniqueName: \"kubernetes.io/projected/f792056c-fffa-4089-a040-8e09a1d6489f-kube-api-access-kzn95\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360847 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360883 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlqs\" (UniqueName: \"kubernetes.io/projected/38ea1569-149a-4a65-a61d-021204d2cde6-kube-api-access-4zlqs\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360918 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360949 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-serving-cert\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.360989 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361025 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-service-ca\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361082 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrc9h\" (UniqueName: \"kubernetes.io/projected/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-kube-api-access-qrc9h\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361115 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361148 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27lxz\" (UniqueName: \"kubernetes.io/projected/887b083d-2d4b-4231-a109-f2e1d5d14c39-kube-api-access-27lxz\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361197 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361241 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361272 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-node-pullsecrets\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361302 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-client\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361332 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361365 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-trusted-ca\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361398 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-serving-cert\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.361428 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.344443 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.362478 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.362501 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fm6nl"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.362525 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.362538 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9spd"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.362555 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.343829 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.344006 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.337772 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.338319 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/858fe62f-567a-47e7-9847-c393790eb41f-etcd-client\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.345540 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.338780 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f792056c-fffa-4089-a040-8e09a1d6489f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.339396 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.363565 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.363586 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.343463 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d2d9bc1-4264-4633-af76-b57166070ab0-audit-policies\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.343480 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.343802 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.365127 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert\") pod \"console-f9d7485db-hbdm4\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.343764 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38ea1569-149a-4a65-a61d-021204d2cde6-machine-approver-tls\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.365255 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/858fe62f-567a-47e7-9847-c393790eb41f-node-pullsecrets\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.365749 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.366407 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.366849 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.368009 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.371218 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.372972 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-etcd-client\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.374245 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.377128 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d9bc1-4264-4633-af76-b57166070ab0-serving-cert\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.377207 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.378690 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.379336 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.381340 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ktwh7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.382192 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-hbdm4"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.383278 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.384111 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.385183 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.385599 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.386360 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bbw9t"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.388136 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.388264 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.389054 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.391445 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-k965v"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.392223 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-k965v" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.393584 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.395442 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4vb5"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.396535 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.398035 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.399193 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.400410 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-brpd2"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.401639 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.402983 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqcjp"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.404929 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bbw9t"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.406134 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k965v"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.406181 4737 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.407489 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.408715 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8p4v9"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.409792 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qgt58"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.410521 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.410914 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.412032 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.413120 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgt58"] Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.426774 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.446646 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462177 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fe2f1edb-4ba9-4745-ba10-2377d62e0313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462285 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-ca\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462325 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-service-ca\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462360 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-serving-cert\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462384 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-trusted-ca\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462402 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-metrics-tls\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462422 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-config\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462463 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462482 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-964nh\" (UniqueName: \"kubernetes.io/projected/036a0e85-4072-4906-90a1-c87c319a4abe-kube-api-access-964nh\") pod \"migrator-59844c95c7-qw4sc\" (UID: \"036a0e85-4072-4906-90a1-c87c319a4abe\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462533 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwc2s\" (UniqueName: \"kubernetes.io/projected/abf4a817-2de4-4f69-9ad8-d15ed857d5ab-kube-api-access-dwc2s\") pod \"downloads-7954f5f757-brpd2\" (UID: \"abf4a817-2de4-4f69-9ad8-d15ed857d5ab\") " pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 
18:32:39.462560 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbl4f\" (UniqueName: \"kubernetes.io/projected/4fecb426-1ec9-4ee4-aee7-f079d088dea4-kube-api-access-jbl4f\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462585 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-kube-api-access-6l5bm\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462605 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2f1edb-4ba9-4745-ba10-2377d62e0313-config\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462638 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-client\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.462657 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2f1edb-4ba9-4745-ba10-2377d62e0313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: 
\"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.463439 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-ca\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.463585 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-service-ca\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.464697 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fecb426-1ec9-4ee4-aee7-f079d088dea4-config\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.464948 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2f1edb-4ba9-4745-ba10-2377d62e0313-config\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.464982 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.466107 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe2f1edb-4ba9-4745-ba10-2377d62e0313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.466936 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.467354 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-serving-cert\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.468932 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-metrics-tls\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.470246 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fecb426-1ec9-4ee4-aee7-f079d088dea4-etcd-client\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:39 crc 
kubenswrapper[4737]: I0126 18:32:39.486788 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.506618 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.526576 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.546043 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.566732 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.586447 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.627173 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.646362 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.666875 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.686288 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 18:32:39 crc 
kubenswrapper[4737]: I0126 18:32:39.706296 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.742397 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.746520 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.766830 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.787212 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.806453 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.827142 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.848529 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.866478 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.887772 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.908405 4737 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress"/"router-certs-default" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.927850 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.947016 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.966434 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 18:32:39 crc kubenswrapper[4737]: I0126 18:32:39.985909 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.007405 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.026424 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.046611 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.067335 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.107800 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.142250 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnqpt\" (UniqueName: 
\"kubernetes.io/projected/c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d-kube-api-access-lnqpt\") pod \"console-operator-58897d9998-l9spd\" (UID: \"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d\") " pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.147827 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.166686 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.186784 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.207557 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.226224 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.246511 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.267137 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.283672 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.287475 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.306972 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.324744 4737 request.go:700] Waited for 1.012548304s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0 Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.327236 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.346820 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.366344 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.388209 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.414002 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.426546 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: 
I0126 18:32:40.446946 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.466349 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.486940 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.493385 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-l9spd"] Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.507117 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.534506 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.546158 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.567336 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.585349 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.623453 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6dj\" (UniqueName: \"kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj\") pod \"console-f9d7485db-hbdm4\" (UID: 
\"255d9d52-daaf-41e1-be00-4a94de0a6324\") " pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.664006 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnxn\" (UniqueName: \"kubernetes.io/projected/8d2d9bc1-4264-4633-af76-b57166070ab0-kube-api-access-xhnxn\") pod \"apiserver-7bbb656c7d-gsfgx\" (UID: \"8d2d9bc1-4264-4633-af76-b57166070ab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.683618 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb8fc\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-kube-api-access-bb8fc\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.687654 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.705833 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blhrc\" (UniqueName: \"kubernetes.io/projected/c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff-kube-api-access-blhrc\") pod \"openshift-config-operator-7777fb866f-p7ll4\" (UID: \"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.706928 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.726356 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.746597 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.765821 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.786532 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.792160 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.803854 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.809815 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.826693 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.846546 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.866296 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx"] Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.868455 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 18:32:40 crc kubenswrapper[4737]: W0126 18:32:40.874115 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2d9bc1_4264_4633_af76_b57166070ab0.slice/crio-2f8f7bba223e5c0e31f4f7f7953cd31b0b6d1b312176a2fb440efe77a92fdc1f WatchSource:0}: Error finding container 2f8f7bba223e5c0e31f4f7f7953cd31b0b6d1b312176a2fb440efe77a92fdc1f: Status 404 returned error can't find the container with id 2f8f7bba223e5c0e31f4f7f7953cd31b0b6d1b312176a2fb440efe77a92fdc1f Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.887326 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.907585 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.926992 4737 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.946603 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.966622 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 18:32:40 crc kubenswrapper[4737]: I0126 18:32:40.987274 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.005907 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.008665 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 18:32:41 crc kubenswrapper[4737]: W0126 18:32:41.014383 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8a64e01_05c7_4ea4_a60c_0bcce98ea3ff.slice/crio-d58dbd21f9efc12167c951e55a94fd90a5dd4793aa60a831ce8018aa0e80ed2f WatchSource:0}: Error finding container d58dbd21f9efc12167c951e55a94fd90a5dd4793aa60a831ce8018aa0e80ed2f: Status 404 returned error can't find the container with id d58dbd21f9efc12167c951e55a94fd90a5dd4793aa60a831ce8018aa0e80ed2f Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.026379 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.035084 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hbdm4"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.047699 4737 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.094090 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv66w\" (UniqueName: \"kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w\") pod \"oauth-openshift-558db77b4-9kjp9\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.104232 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27lxz\" (UniqueName: \"kubernetes.io/projected/887b083d-2d4b-4231-a109-f2e1d5d14c39-kube-api-access-27lxz\") pod \"openshift-apiserver-operator-796bbdcf4f-lnrns\" (UID: \"887b083d-2d4b-4231-a109-f2e1d5d14c39\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.108367 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbdm4" event={"ID":"255d9d52-daaf-41e1-be00-4a94de0a6324","Type":"ContainerStarted","Data":"cc3bb592bcc22180a1d958bf5bdaaf966a903ba616b9b7c7dcf4a2f47bfa9027"} Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.115480 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" event={"ID":"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff","Type":"ContainerStarted","Data":"d58dbd21f9efc12167c951e55a94fd90a5dd4793aa60a831ce8018aa0e80ed2f"} Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.123274 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" event={"ID":"8d2d9bc1-4264-4633-af76-b57166070ab0","Type":"ContainerStarted","Data":"2f8f7bba223e5c0e31f4f7f7953cd31b0b6d1b312176a2fb440efe77a92fdc1f"} Jan 26 
18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.126889 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l9spd" event={"ID":"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d","Type":"ContainerStarted","Data":"25a093323b3d345e9450fa4347845a051adfc54112cd7d824cf085d0a4b5f46b"} Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.126965 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-l9spd" event={"ID":"c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d","Type":"ContainerStarted","Data":"402d7252b775d0666e2e863a467fa4a6254c0d89d79bbeab29c47f6bc1769cab"} Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.127469 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.129694 4737 patch_prober.go:28] interesting pod/console-operator-58897d9998-l9spd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.129757 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-l9spd" podUID="c63a5aaa-f8bc-481b-b607-cda4e9eb4f9d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.129775 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrc9h\" (UniqueName: \"kubernetes.io/projected/d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76-kube-api-access-qrc9h\") pod \"cluster-samples-operator-665b6dd947-kzwmx\" (UID: \"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.143247 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrx2\" (UniqueName: \"kubernetes.io/projected/858fe62f-567a-47e7-9847-c393790eb41f-kube-api-access-jwrx2\") pod \"apiserver-76f77b778f-7jxs2\" (UID: \"858fe62f-567a-47e7-9847-c393790eb41f\") " pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.163614 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzn95\" (UniqueName: \"kubernetes.io/projected/f792056c-fffa-4089-a040-8e09a1d6489f-kube-api-access-kzn95\") pod \"kube-storage-version-migrator-operator-b67b599dd-s7n9n\" (UID: \"f792056c-fffa-4089-a040-8e09a1d6489f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.175671 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.179558 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlqs\" (UniqueName: \"kubernetes.io/projected/38ea1569-149a-4a65-a61d-021204d2cde6-kube-api-access-4zlqs\") pod \"machine-approver-56656f9798-f84g9\" (UID: \"38ea1569-149a-4a65-a61d-021204d2cde6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.193134 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.195034 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.200849 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22b2e7a5-b20a-41cd-b9fc-694a9aa3e964-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-htkzj\" (UID: \"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.206196 4737 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.214374 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.226867 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.248214 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.267033 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.287570 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.327029 4737 request.go:700] Waited for 1.934470923s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.327264 
4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.330175 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.330709 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.337185 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.348602 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.366300 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 18:32:41 crc kubenswrapper[4737]: W0126 18:32:41.366534 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ea1569_149a_4a65_a61d_021204d2cde6.slice/crio-a7ecb136203bdcb98b6a1d1027e00d22f10b0b1330a83e6bff9c1ace011bae62 WatchSource:0}: Error finding container a7ecb136203bdcb98b6a1d1027e00d22f10b0b1330a83e6bff9c1ace011bae62: Status 404 returned error can't find the container with id a7ecb136203bdcb98b6a1d1027e00d22f10b0b1330a83e6bff9c1ace011bae62 Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.376174 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.386648 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.436719 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2f1edb-4ba9-4745-ba10-2377d62e0313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-shctm\" (UID: \"fe2f1edb-4ba9-4745-ba10-2377d62e0313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.449238 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwc2s\" (UniqueName: \"kubernetes.io/projected/abf4a817-2de4-4f69-9ad8-d15ed857d5ab-kube-api-access-dwc2s\") pod \"downloads-7954f5f757-brpd2\" (UID: \"abf4a817-2de4-4f69-9ad8-d15ed857d5ab\") " pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.497732 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.502120 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-964nh\" (UniqueName: \"kubernetes.io/projected/036a0e85-4072-4906-90a1-c87c319a4abe-kube-api-access-964nh\") pod \"migrator-59844c95c7-qw4sc\" (UID: \"036a0e85-4072-4906-90a1-c87c319a4abe\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" Jan 26 18:32:41 crc 
kubenswrapper[4737]: I0126 18:32:41.521227 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbl4f\" (UniqueName: \"kubernetes.io/projected/4fecb426-1ec9-4ee4-aee7-f079d088dea4-kube-api-access-jbl4f\") pod \"etcd-operator-b45778765-nqcjp\" (UID: \"4fecb426-1ec9-4ee4-aee7-f079d088dea4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.525010 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.547200 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.559740 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9kjp9"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.559843 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdjb\" (UniqueName: \"kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb\") pod \"route-controller-manager-6576b87f9c-7h9cs\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.593049 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.593165 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5bm\" (UniqueName: \"kubernetes.io/projected/90b067c5-a234-4e7f-a68b-e0b1c5cdac35-kube-api-access-6l5bm\") pod \"ingress-operator-5b745b69d9-8phw8\" (UID: \"90b067c5-a234-4e7f-a68b-e0b1c5cdac35\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.593385 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.593857 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599217 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599282 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754c2fa2-3520-4a1e-a052-16c16efc7d51-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599314 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/754c2fa2-3520-4a1e-a052-16c16efc7d51-kube-api-access-ntdp8\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599384 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599425 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8jn\" (UniqueName: \"kubernetes.io/projected/c8be3738-e6c1-4cc8-ae8a-a23387b73213-kube-api-access-mh8jn\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599449 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7637c14c-92d8-4049-945c-33d6c7f7f9d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599481 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8be3738-e6c1-4cc8-ae8a-a23387b73213-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ktwh7\" 
(UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599572 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599624 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8453aa-abd7-49cc-a743-5e6bb8649740-metrics-tls\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599661 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7637c14c-92d8-4049-945c-33d6c7f7f9d1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599704 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7v78\" (UniqueName: \"kubernetes.io/projected/3c8453aa-abd7-49cc-a743-5e6bb8649740-kube-api-access-z7v78\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599731 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-images\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599770 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599806 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599833 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7637c14c-92d8-4049-945c-33d6c7f7f9d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599861 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-config\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599887 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754c2fa2-3520-4a1e-a052-16c16efc7d51-proxy-tls\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599929 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599958 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.599982 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7x4\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.601082 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.101046561 +0000 UTC m=+135.409241269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.601555 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jxs2"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.649191 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.684140 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.700796 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701132 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6srr\" (UniqueName: \"kubernetes.io/projected/a2b6e28b-2e70-4f70-9284-942460f8d1fd-kube-api-access-l6srr\") pod \"dns-default-k965v\" (UID: 
\"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhmr\" (UniqueName: \"kubernetes.io/projected/d0215af9-47a6-42bb-bb48-29c002caff5a-kube-api-access-qfhmr\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701232 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701253 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701302 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkdkp\" (UniqueName: \"kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701331 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-webhook-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.701391 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.201328242 +0000 UTC m=+135.509522970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701451 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-plugins-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701513 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754c2fa2-3520-4a1e-a052-16c16efc7d51-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701553 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8rd\" (UniqueName: \"kubernetes.io/projected/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-kube-api-access-8f8rd\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701580 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-mountpoint-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701603 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmv94\" (UniqueName: \"kubernetes.io/projected/ed97d0e9-4ae3-4db6-9635-38141f37948e-kube-api-access-xmv94\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701654 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-default-certificate\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701678 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/60a6a19b-baa5-47c5-8733-202b5bfd0c97-service-ca-bundle\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701742 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qx6p\" (UniqueName: \"kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701789 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701830 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8jn\" (UniqueName: \"kubernetes.io/projected/c8be3738-e6c1-4cc8-ae8a-a23387b73213-kube-api-access-mh8jn\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701851 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6tf\" (UniqueName: \"kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" 
Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701890 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nr29\" (UniqueName: \"kubernetes.io/projected/17e356af-cb63-4f1c-9b53-d226b15d5a35-kube-api-access-8nr29\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701914 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8be3738-e6c1-4cc8-ae8a-a23387b73213-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701929 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-srv-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701948 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8407c17-c270-4f2c-be13-4b03ee2bbc28-tmpfs\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701968 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-csi-data-dir\") pod 
\"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.701993 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.702008 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.702912 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754c2fa2-3520-4a1e-a052-16c16efc7d51-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.703219 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxgbr\" (UniqueName: \"kubernetes.io/projected/a8e30f97-e004-4054-9ffb-9f1bb9df0470-kube-api-access-rxgbr\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.703303 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-cabundle\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.703382 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmkd\" (UniqueName: \"kubernetes.io/projected/833792c1-41f1-45ee-b08b-aacc3388e916-kube-api-access-vxmkd\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.703554 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-config\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.703592 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflkm\" (UniqueName: \"kubernetes.io/projected/60a6a19b-baa5-47c5-8733-202b5bfd0c97-kube-api-access-dflkm\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704011 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704130 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4a4950-08fa-4707-8af8-4814f89b5ec8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704139 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704305 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-profile-collector-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704579 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b69e01a5-0952-496d-97cd-21586e50a7de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc 
kubenswrapper[4737]: I0126 18:32:41.704605 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704660 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1595442-c281-470d-a08c-b04158a7c899-config\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704716 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78t7s\" (UniqueName: \"kubernetes.io/projected/d1595442-c281-470d-a08c-b04158a7c899-kube-api-access-78t7s\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704746 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704779 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7637c14c-92d8-4049-945c-33d6c7f7f9d1-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704938 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-metrics-certs\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704966 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.704984 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-apiservice-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705015 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 
18:32:41.705033 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705050 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2b6e28b-2e70-4f70-9284-942460f8d1fd-config-volume\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705185 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-images\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705231 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2b6e28b-2e70-4f70-9284-942460f8d1fd-metrics-tls\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705606 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0215af9-47a6-42bb-bb48-29c002caff5a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705646 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/833792c1-41f1-45ee-b08b-aacc3388e916-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705673 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrd7\" (UniqueName: \"kubernetes.io/projected/cf12407d-16ca-40d9-8279-f46693aee8b1-kube-api-access-9rrd7\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705713 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705730 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7637c14c-92d8-4049-945c-33d6c7f7f9d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.705747 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69e01a5-0952-496d-97cd-21586e50a7de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.706055 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-images\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.706662 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7637c14c-92d8-4049-945c-33d6c7f7f9d1-config\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.707393 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.709281 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 
crc kubenswrapper[4737]: I0126 18:32:41.714651 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c8be3738-e6c1-4cc8-ae8a-a23387b73213-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716282 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754c2fa2-3520-4a1e-a052-16c16efc7d51-proxy-tls\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716337 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716423 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b453c-36fe-4b0b-8e67-12715f0e15e7-serving-cert\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716466 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716488 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7x4\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716509 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zftsq\" (UniqueName: \"kubernetes.io/projected/34949d5f-f358-40f5-8b72-7e82ec14b2ad-kube-api-access-zftsq\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716578 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716600 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/754c2fa2-3520-4a1e-a052-16c16efc7d51-kube-api-access-ntdp8\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 
18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716621 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-socket-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716643 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smvr\" (UniqueName: \"kubernetes.io/projected/ad4a4950-08fa-4707-8af8-4814f89b5ec8-kube-api-access-8smvr\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716663 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-certs\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716711 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716732 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34949d5f-f358-40f5-8b72-7e82ec14b2ad-proxy-tls\") 
pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716771 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xz8\" (UniqueName: \"kubernetes.io/projected/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-kube-api-access-c4xz8\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716807 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf12407d-16ca-40d9-8279-f46693aee8b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716851 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7637c14c-92d8-4049-945c-33d6c7f7f9d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716870 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-key\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 
18:32:41.716919 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2jm\" (UniqueName: \"kubernetes.io/projected/2c6d44e4-59b3-46ff-8a01-43c41890a722-kube-api-access-fk2jm\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716942 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b69e01a5-0952-496d-97cd-21586e50a7de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.716964 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-cert\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717041 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gccn\" (UniqueName: \"kubernetes.io/projected/b40b453c-36fe-4b0b-8e67-12715f0e15e7-kube-api-access-6gccn\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717091 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-stats-auth\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717125 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-srv-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717154 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1595442-c281-470d-a08c-b04158a7c899-serving-cert\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717201 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717248 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8453aa-abd7-49cc-a743-5e6bb8649740-metrics-tls\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717242 4737 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717272 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-node-bootstrap-token\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.717957 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4a4950-08fa-4707-8af8-4814f89b5ec8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.718087 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-registration-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.719055 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7v78\" (UniqueName: \"kubernetes.io/projected/3c8453aa-abd7-49cc-a743-5e6bb8649740-kube-api-access-z7v78\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.720050 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-images\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.720464 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.720498 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-config\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.720525 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbmf\" (UniqueName: \"kubernetes.io/projected/a8407c17-c270-4f2c-be13-4b03ee2bbc28-kube-api-access-lvbmf\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.721013 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.220995398 +0000 UTC m=+135.529190116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.724740 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8be3738-e6c1-4cc8-ae8a-a23387b73213-config\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.726648 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.728310 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754c2fa2-3520-4a1e-a052-16c16efc7d51-proxy-tls\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.729540 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: 
\"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.731051 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7637c14c-92d8-4049-945c-33d6c7f7f9d1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.731395 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.738761 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8453aa-abd7-49cc-a743-5e6bb8649740-metrics-tls\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.741166 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.742429 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8jn\" (UniqueName: \"kubernetes.io/projected/c8be3738-e6c1-4cc8-ae8a-a23387b73213-kube-api-access-mh8jn\") pod \"machine-api-operator-5694c8668f-ktwh7\" (UID: \"c8be3738-e6c1-4cc8-ae8a-a23387b73213\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc 
kubenswrapper[4737]: I0126 18:32:41.760360 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm"] Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.765586 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7637c14c-92d8-4049-945c-33d6c7f7f9d1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vkl6w\" (UID: \"7637c14c-92d8-4049-945c-33d6c7f7f9d1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: W0126 18:32:41.777498 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe2f1edb_4ba9_4745_ba10_2377d62e0313.slice/crio-44ace6513e7109455ee3e30019b19b7626f6bcc6e919a19448fbce845b9cece3 WatchSource:0}: Error finding container 44ace6513e7109455ee3e30019b19b7626f6bcc6e919a19448fbce845b9cece3: Status 404 returned error can't find the container with id 44ace6513e7109455ee3e30019b19b7626f6bcc6e919a19448fbce845b9cece3 Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.809538 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.814670 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7x4\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.818404 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.822632 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.823360 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.323333826 +0000 UTC m=+135.631528534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.829660 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b69e01a5-0952-496d-97cd-21586e50a7de-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830686 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-cert\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830764 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gccn\" (UniqueName: \"kubernetes.io/projected/b40b453c-36fe-4b0b-8e67-12715f0e15e7-kube-api-access-6gccn\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830824 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-stats-auth\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830875 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-srv-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830913 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1595442-c281-470d-a08c-b04158a7c899-serving-cert\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.830983 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-node-bootstrap-token\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831063 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4a4950-08fa-4707-8af8-4814f89b5ec8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831111 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-registration-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831170 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-images\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831204 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbmf\" (UniqueName: \"kubernetes.io/projected/a8407c17-c270-4f2c-be13-4b03ee2bbc28-kube-api-access-lvbmf\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831241 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831270 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6srr\" (UniqueName: \"kubernetes.io/projected/a2b6e28b-2e70-4f70-9284-942460f8d1fd-kube-api-access-l6srr\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831297 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhmr\" (UniqueName: \"kubernetes.io/projected/d0215af9-47a6-42bb-bb48-29c002caff5a-kube-api-access-qfhmr\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831329 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831374 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-webhook-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831399 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-plugins-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831433 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkdkp\" (UniqueName: \"kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831458 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f8rd\" (UniqueName: \"kubernetes.io/projected/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-kube-api-access-8f8rd\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831478 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-mountpoint-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831502 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xmv94\" (UniqueName: \"kubernetes.io/projected/ed97d0e9-4ae3-4db6-9635-38141f37948e-kube-api-access-xmv94\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qx6p\" (UniqueName: \"kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831578 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-default-certificate\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831602 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60a6a19b-baa5-47c5-8733-202b5bfd0c97-service-ca-bundle\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831628 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc 
kubenswrapper[4737]: I0126 18:32:41.831653 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6tf\" (UniqueName: \"kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831691 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nr29\" (UniqueName: \"kubernetes.io/projected/17e356af-cb63-4f1c-9b53-d226b15d5a35-kube-api-access-8nr29\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831726 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-srv-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831752 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8407c17-c270-4f2c-be13-4b03ee2bbc28-tmpfs\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831784 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-csi-data-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831817 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831842 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxgbr\" (UniqueName: \"kubernetes.io/projected/a8e30f97-e004-4054-9ffb-9f1bb9df0470-kube-api-access-rxgbr\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831868 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831907 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-cabundle\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831951 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxmkd\" (UniqueName: 
\"kubernetes.io/projected/833792c1-41f1-45ee-b08b-aacc3388e916-kube-api-access-vxmkd\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.831994 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-config\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832025 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dflkm\" (UniqueName: \"kubernetes.io/projected/60a6a19b-baa5-47c5-8733-202b5bfd0c97-kube-api-access-dflkm\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832142 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832186 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad4a4950-08fa-4707-8af8-4814f89b5ec8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 
crc kubenswrapper[4737]: I0126 18:32:41.832214 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-profile-collector-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832277 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b69e01a5-0952-496d-97cd-21586e50a7de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832318 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832346 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1595442-c281-470d-a08c-b04158a7c899-config\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832405 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78t7s\" (UniqueName: \"kubernetes.io/projected/d1595442-c281-470d-a08c-b04158a7c899-kube-api-access-78t7s\") pod 
\"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832435 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832461 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-metrics-certs\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832490 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-apiservice-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832520 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832542 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/a2b6e28b-2e70-4f70-9284-942460f8d1fd-config-volume\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832760 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832791 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832832 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/833792c1-41f1-45ee-b08b-aacc3388e916-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832865 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2b6e28b-2e70-4f70-9284-942460f8d1fd-metrics-tls\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832894 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0215af9-47a6-42bb-bb48-29c002caff5a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832934 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrd7\" (UniqueName: \"kubernetes.io/projected/cf12407d-16ca-40d9-8279-f46693aee8b1-kube-api-access-9rrd7\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.832974 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69e01a5-0952-496d-97cd-21586e50a7de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833008 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833037 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: 
\"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833103 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b453c-36fe-4b0b-8e67-12715f0e15e7-serving-cert\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833158 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zftsq\" (UniqueName: \"kubernetes.io/projected/34949d5f-f358-40f5-8b72-7e82ec14b2ad-kube-api-access-zftsq\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833211 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-socket-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833249 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8smvr\" (UniqueName: \"kubernetes.io/projected/ad4a4950-08fa-4707-8af8-4814f89b5ec8-kube-api-access-8smvr\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833285 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-certs\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833351 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xz8\" (UniqueName: \"kubernetes.io/projected/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-kube-api-access-c4xz8\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833384 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34949d5f-f358-40f5-8b72-7e82ec14b2ad-proxy-tls\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833426 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf12407d-16ca-40d9-8279-f46693aee8b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.833468 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-key\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: 
I0126 18:32:41.833507 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk2jm\" (UniqueName: \"kubernetes.io/projected/2c6d44e4-59b3-46ff-8a01-43c41890a722-kube-api-access-fk2jm\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.834797 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-cabundle\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.835685 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-config\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.837714 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-srv-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.837777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-stats-auth\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 
18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.838199 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/754c2fa2-3520-4a1e-a052-16c16efc7d51-kube-api-access-ntdp8\") pod \"machine-config-controller-84d6567774-n2t8j\" (UID: \"754c2fa2-3520-4a1e-a052-16c16efc7d51\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.838759 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.839585 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-cert\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.839875 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-socket-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.840120 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.840871 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69e01a5-0952-496d-97cd-21586e50a7de-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.841034 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b69e01a5-0952-496d-97cd-21586e50a7de-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.841523 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.841761 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1595442-c281-470d-a08c-b04158a7c899-serving-cert\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.843938 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ad4a4950-08fa-4707-8af8-4814f89b5ec8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.843999 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.844207 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.844912 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1595442-c281-470d-a08c-b04158a7c899-config\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.845148 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-node-bootstrap-token\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.845570 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-default-certificate\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.847398 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-certs\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.847503 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.848181 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.848362 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60a6a19b-baa5-47c5-8733-202b5bfd0c97-service-ca-bundle\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.848500 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0215af9-47a6-42bb-bb48-29c002caff5a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.849352 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a8407c17-c270-4f2c-be13-4b03ee2bbc28-tmpfs\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.849429 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-csi-data-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.850551 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34949d5f-f358-40f5-8b72-7e82ec14b2ad-images\") pod 
\"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.852316 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-srv-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.852991 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad4a4950-08fa-4707-8af8-4814f89b5ec8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.853750 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-plugins-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.853843 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.853915 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a2b6e28b-2e70-4f70-9284-942460f8d1fd-config-volume\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.854752 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-registration-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.855315 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2b6e28b-2e70-4f70-9284-942460f8d1fd-metrics-tls\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.855856 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.355814158 +0000 UTC m=+135.664008856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.856521 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.857553 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ed97d0e9-4ae3-4db6-9635-38141f37948e-mountpoint-dir\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.858757 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c6d44e4-59b3-46ff-8a01-43c41890a722-profile-collector-cert\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.859830 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b40b453c-36fe-4b0b-8e67-12715f0e15e7-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.860520 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34949d5f-f358-40f5-8b72-7e82ec14b2ad-proxy-tls\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.860538 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.861686 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.864940 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.865867 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/833792c1-41f1-45ee-b08b-aacc3388e916-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.867183 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.867190 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-apiservice-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.869141 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/cf12407d-16ca-40d9-8279-f46693aee8b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.871216 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40b453c-36fe-4b0b-8e67-12715f0e15e7-serving-cert\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.874579 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8407c17-c270-4f2c-be13-4b03ee2bbc28-webhook-cert\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.879306 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.880718 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60a6a19b-baa5-47c5-8733-202b5bfd0c97-metrics-certs\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.885761 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/17e356af-cb63-4f1c-9b53-d226b15d5a35-signing-key\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.892248 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a8e30f97-e004-4054-9ffb-9f1bb9df0470-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: \"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.893103 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7v78\" (UniqueName: \"kubernetes.io/projected/3c8453aa-abd7-49cc-a743-5e6bb8649740-kube-api-access-z7v78\") pod \"dns-operator-744455d44c-fm6nl\" (UID: \"3c8453aa-abd7-49cc-a743-5e6bb8649740\") " pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.902738 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b69e01a5-0952-496d-97cd-21586e50a7de-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-p6k9r\" (UID: \"b69e01a5-0952-496d-97cd-21586e50a7de\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.914308 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.935992 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.936516 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.436486435 +0000 UTC m=+135.744681143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.937188 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:41 crc kubenswrapper[4737]: E0126 18:32:41.937691 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.437680968 +0000 UTC m=+135.745875676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.948625 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qx6p\" (UniqueName: \"kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p\") pod \"controller-manager-879f6c89f-n7cr7\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.968442 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk2jm\" (UniqueName: \"kubernetes.io/projected/2c6d44e4-59b3-46ff-8a01-43c41890a722-kube-api-access-fk2jm\") pod \"catalog-operator-68c6474976-t77ps\" (UID: \"2c6d44e4-59b3-46ff-8a01-43c41890a722\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.973100 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxmkd\" (UniqueName: \"kubernetes.io/projected/833792c1-41f1-45ee-b08b-aacc3388e916-kube-api-access-vxmkd\") pod \"multus-admission-controller-857f4d67dd-g4vb5\" (UID: \"833792c1-41f1-45ee-b08b-aacc3388e916\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.984641 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.986316 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dflkm\" (UniqueName: \"kubernetes.io/projected/60a6a19b-baa5-47c5-8733-202b5bfd0c97-kube-api-access-dflkm\") pod \"router-default-5444994796-wwzqx\" (UID: \"60a6a19b-baa5-47c5-8733-202b5bfd0c97\") " pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:41 crc kubenswrapper[4737]: I0126 18:32:41.993009 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.007824 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nr29\" (UniqueName: \"kubernetes.io/projected/17e356af-cb63-4f1c-9b53-d226b15d5a35-kube-api-access-8nr29\") pod \"service-ca-9c57cc56f-8p4v9\" (UID: \"17e356af-cb63-4f1c-9b53-d226b15d5a35\") " pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.012534 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.035004 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gccn\" (UniqueName: \"kubernetes.io/projected/b40b453c-36fe-4b0b-8e67-12715f0e15e7-kube-api-access-6gccn\") pod \"authentication-operator-69f744f599-scmj7\" (UID: \"b40b453c-36fe-4b0b-8e67-12715f0e15e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.038452 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.038813 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.538796333 +0000 UTC m=+135.846991041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.049159 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrd7\" (UniqueName: \"kubernetes.io/projected/cf12407d-16ca-40d9-8279-f46693aee8b1-kube-api-access-9rrd7\") pod \"control-plane-machine-set-operator-78cbb6b69f-6f78q\" (UID: \"cf12407d-16ca-40d9-8279-f46693aee8b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.075549 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xz8\" (UniqueName: \"kubernetes.io/projected/0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9-kube-api-access-c4xz8\") pod \"ingress-canary-qgt58\" (UID: \"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9\") " pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.077421 4737 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-console/downloads-7954f5f757-brpd2"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.089683 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8smvr\" (UniqueName: \"kubernetes.io/projected/ad4a4950-08fa-4707-8af8-4814f89b5ec8-kube-api-access-8smvr\") pod \"openshift-controller-manager-operator-756b6f6bc6-z594r\" (UID: \"ad4a4950-08fa-4707-8af8-4814f89b5ec8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.105435 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.109386 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhmr\" (UniqueName: \"kubernetes.io/projected/d0215af9-47a6-42bb-bb48-29c002caff5a-kube-api-access-qfhmr\") pod \"package-server-manager-789f6589d5-6jt9w\" (UID: \"d0215af9-47a6-42bb-bb48-29c002caff5a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.127899 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78t7s\" (UniqueName: \"kubernetes.io/projected/d1595442-c281-470d-a08c-b04158a7c899-kube-api-access-78t7s\") pod \"service-ca-operator-777779d784-jhhdn\" (UID: \"d1595442-c281-470d-a08c-b04158a7c899\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.138479 4737 generic.go:334] "Generic (PLEG): container finished" podID="8d2d9bc1-4264-4633-af76-b57166070ab0" containerID="e05af55f95506daf55f78f48a03b806e4ec00aed46ae1af76cb9ae607d86796b" exitCode=0 Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.138567 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" event={"ID":"8d2d9bc1-4264-4633-af76-b57166070ab0","Type":"ContainerDied","Data":"e05af55f95506daf55f78f48a03b806e4ec00aed46ae1af76cb9ae607d86796b"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.140258 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.140790 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.640766241 +0000 UTC m=+135.948960949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.144095 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" event={"ID":"858fe62f-567a-47e7-9847-c393790eb41f","Type":"ContainerStarted","Data":"d4795acf87705c2ac71c29cebd1c8b4bc1839ed88c0e27cb62085bf8774ccf7b"} Jan 26 18:32:42 crc kubenswrapper[4737]: W0126 18:32:42.144925 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabf4a817_2de4_4f69_9ad8_d15ed857d5ab.slice/crio-154950165c69ac0188c0d3435241d3f7a3ad8ce40edf64b3bff1547528c13da0 WatchSource:0}: Error finding container 154950165c69ac0188c0d3435241d3f7a3ad8ce40edf64b3bff1547528c13da0: Status 404 returned error can't find the container with id 154950165c69ac0188c0d3435241d3f7a3ad8ce40edf64b3bff1547528c13da0 Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.145261 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" event={"ID":"9ceadcc2-c87f-4382-895a-f052e3c3597d","Type":"ContainerStarted","Data":"9f408e8e9550ffb6d5a4cc6221e30443ddda359fcec4f12c81e0c6981597e4c5"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.147495 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxgbr\" (UniqueName: \"kubernetes.io/projected/a8e30f97-e004-4054-9ffb-9f1bb9df0470-kube-api-access-rxgbr\") pod \"olm-operator-6b444d44fb-jxrhw\" (UID: 
\"a8e30f97-e004-4054-9ffb-9f1bb9df0470\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.151173 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" event={"ID":"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964","Type":"ContainerStarted","Data":"d3fc189eef584a5071a7dfd1821f233001a86c86afb4e61499874b9562d5555c"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.151265 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" event={"ID":"22b2e7a5-b20a-41cd-b9fc-694a9aa3e964","Type":"ContainerStarted","Data":"56c297e569b2fff091e3e495aeb4f1487da767fd098817e792ded86edc0b87d6"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.154924 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" event={"ID":"fdc44942-56de-4694-bcd4-bca48f1e1e08","Type":"ContainerStarted","Data":"7590ec628d4c165b51e9a8a05ac09c509e26161d57da8ee9ed3598ec56b7dd4b"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.158329 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" event={"ID":"f792056c-fffa-4089-a040-8e09a1d6489f","Type":"ContainerStarted","Data":"1145db900d51fd1c2bfd211fe8ffb899055a175eda60d398161412c298da0b17"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.170368 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkdkp\" (UniqueName: \"kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp\") pod \"collect-profiles-29490870-k4f69\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:42 crc kubenswrapper[4737]: 
I0126 18:32:42.171185 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" event={"ID":"fe2f1edb-4ba9-4745-ba10-2377d62e0313","Type":"ContainerStarted","Data":"44ace6513e7109455ee3e30019b19b7626f6bcc6e919a19448fbce845b9cece3"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.189606 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.198239 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6tf\" (UniqueName: \"kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf\") pod \"marketplace-operator-79b997595-gftx9\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.199322 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.203135 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zftsq\" (UniqueName: \"kubernetes.io/projected/34949d5f-f358-40f5-8b72-7e82ec14b2ad-kube-api-access-zftsq\") pod \"machine-config-operator-74547568cd-mv7h7\" (UID: \"34949d5f-f358-40f5-8b72-7e82ec14b2ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.207712 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.230316 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.236191 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.238438 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6srr\" (UniqueName: \"kubernetes.io/projected/a2b6e28b-2e70-4f70-9284-942460f8d1fd-kube-api-access-l6srr\") pod \"dns-default-k965v\" (UID: \"a2b6e28b-2e70-4f70-9284-942460f8d1fd\") " pod="openshift-dns/dns-default-k965v" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.242218 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.243961 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.743934233 +0000 UTC m=+136.052128941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.245005 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.247400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" event={"ID":"887b083d-2d4b-4231-a109-f2e1d5d14c39","Type":"ContainerStarted","Data":"3e4b054a334d8ae6b0a99c502b1df3f00f5a4f7b947822bbc23b4d649d40378a"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.252715 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.256228 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8rd\" (UniqueName: \"kubernetes.io/projected/8281dd1a-854f-48af-855b-bb3f8f2a2b2a-kube-api-access-8f8rd\") pod \"machine-config-server-m5fhx\" (UID: \"8281dd1a-854f-48af-855b-bb3f8f2a2b2a\") " pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.261203 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.269244 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.272089 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmv94\" (UniqueName: \"kubernetes.io/projected/ed97d0e9-4ae3-4db6-9635-38141f37948e-kube-api-access-xmv94\") pod \"csi-hostpathplugin-bbw9t\" (UID: \"ed97d0e9-4ae3-4db6-9635-38141f37948e\") " pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.276643 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m5fhx" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.278529 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbdm4" event={"ID":"255d9d52-daaf-41e1-be00-4a94de0a6324","Type":"ContainerStarted","Data":"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.284974 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" event={"ID":"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76","Type":"ContainerStarted","Data":"b6678f64cb05d2c3d9efca71470064cf06f37d63647734b24a2036201a26d3b6"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.291326 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbmf\" (UniqueName: \"kubernetes.io/projected/a8407c17-c270-4f2c-be13-4b03ee2bbc28-kube-api-access-lvbmf\") pod \"packageserver-d55dfcdfc-sb8td\" (UID: \"a8407c17-c270-4f2c-be13-4b03ee2bbc28\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.292777 4737 generic.go:334] "Generic (PLEG): container finished" podID="c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff" 
containerID="a8382a0a14f7a5452169ddd1c16da6e190b4e561e1d5420f6833b827f127b2dd" exitCode=0 Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.292855 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" event={"ID":"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff","Type":"ContainerDied","Data":"a8382a0a14f7a5452169ddd1c16da6e190b4e561e1d5420f6833b827f127b2dd"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.302597 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.308105 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" event={"ID":"38ea1569-149a-4a65-a61d-021204d2cde6","Type":"ContainerStarted","Data":"e1cf6f639caf638bbea8dfd11ad533eabdcd141a470d74ea8bb11c8b3c703e82"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.308199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" event={"ID":"38ea1569-149a-4a65-a61d-021204d2cde6","Type":"ContainerStarted","Data":"a7ecb136203bdcb98b6a1d1027e00d22f10b0b1330a83e6bff9c1ace011bae62"} Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.311433 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.320255 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.338017 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-l9spd" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.340315 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.340763 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.353177 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.355443 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k965v" Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.355463 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.855446696 +0000 UTC m=+136.163641404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.359289 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgt58" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.454391 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.454827 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:42.954807512 +0000 UTC m=+136.263002220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.521472 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.556205 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.557124 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.05710951 +0000 UTC m=+136.365304218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.659647 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.663785 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.163753128 +0000 UTC m=+136.471947836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.678842 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.768172 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.768706 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.268681768 +0000 UTC m=+136.576876476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.802414 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqcjp"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.858931 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.872607 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.873316 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.373295929 +0000 UTC m=+136.681490637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.879910 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.882002 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ktwh7"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.887451 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.904764 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.920176 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4vb5"] Jan 26 18:32:42 crc kubenswrapper[4737]: I0126 18:32:42.974855 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:42 crc kubenswrapper[4737]: E0126 18:32:42.975320 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.475302559 +0000 UTC m=+136.783497267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:43 crc kubenswrapper[4737]: I0126 18:32:43.076395 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:43 crc kubenswrapper[4737]: E0126 18:32:43.077028 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.576978359 +0000 UTC m=+136.885173067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:43 crc kubenswrapper[4737]: I0126 18:32:43.077610 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:43 crc kubenswrapper[4737]: E0126 18:32:43.078297 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.578288345 +0000 UTC m=+136.886483043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:43 crc kubenswrapper[4737]: I0126 18:32:43.178413 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:43 crc kubenswrapper[4737]: E0126 18:32:43.184642 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.684583984 +0000 UTC m=+136.992778712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:43 crc kubenswrapper[4737]: I0126 18:32:43.281174 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:43 crc kubenswrapper[4737]: E0126 18:32:43.281842 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.781826791 +0000 UTC m=+137.090021499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.350658 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" event={"ID":"fdc44942-56de-4694-bcd4-bca48f1e1e08","Type":"ContainerStarted","Data":"a7dd5c5a40c38e57b127df6dfb7900c2f3b7b3dc73cb475cba8fabacacbb037e"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.351905 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.378506 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" event={"ID":"b69e01a5-0952-496d-97cd-21586e50a7de","Type":"ContainerStarted","Data":"c455c85953dbee5961f2b01dc8e7a46916bd103d6f14e518538d1ec36369511c"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.384189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.384435 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" 
event={"ID":"9ceadcc2-c87f-4382-895a-f052e3c3597d","Type":"ContainerStarted","Data":"11a0ae5f0b174de66e703b99bfc2b5d02f9a22aa60ae32587ca86366804c4487"} Jan 26 18:32:44 crc kubenswrapper[4737]: E0126 18:32:43.384798 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.884740996 +0000 UTC m=+137.192935704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.385513 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.389121 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" event={"ID":"7637c14c-92d8-4049-945c-33d6c7f7f9d1","Type":"ContainerStarted","Data":"2045a5e765f761c27a4606878fa77b5e1eddfa8c120fbad193d927934c31796f"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.397088 4737 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7h9cs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 
18:32:43.397177 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.453198 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" event={"ID":"754c2fa2-3520-4a1e-a052-16c16efc7d51","Type":"ContainerStarted","Data":"6c50c59daae848297be545c80220f872326cdd5adbafe3379086a784865be061"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.456127 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69"] Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.457870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" event={"ID":"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76","Type":"ContainerStarted","Data":"67d723e070f7e442acf184d8b50ee2fa7716453d3c1df97c31534643c01cde54"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.458860 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.460092 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wwzqx" event={"ID":"60a6a19b-baa5-47c5-8733-202b5bfd0c97","Type":"ContainerStarted","Data":"3e2683aea03554aa0c09881c55b32a2dbeeca4175b6994947bc7f62319165e6d"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.461396 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" 
event={"ID":"4fecb426-1ec9-4ee4-aee7-f079d088dea4","Type":"ContainerStarted","Data":"0c84d685d07d4524264783e15530b329757ff68378f05c469c970f89cde0f53a"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.462893 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" event={"ID":"fe2f1edb-4ba9-4745-ba10-2377d62e0313","Type":"ContainerStarted","Data":"fb9ee1962232307446158addf22ded1208e9b0d865ccaa034a7ce36d820c8062"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.469215 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" event={"ID":"c8a64e01-05c7-4ea4-a60c-0bcce98ea3ff","Type":"ContainerStarted","Data":"3d491bc935718a6856f674840ddd595df4081aca37738ad04da9ed35ea546f32"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.471771 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.476121 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.476263 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-brpd2" event={"ID":"abf4a817-2de4-4f69-9ad8-d15ed857d5ab","Type":"ContainerStarted","Data":"f4563b3ac1ec51fe6f41461f05bbeddf91d05132c7af6bc746df7670570e0972"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.476286 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-brpd2" event={"ID":"abf4a817-2de4-4f69-9ad8-d15ed857d5ab","Type":"ContainerStarted","Data":"154950165c69ac0188c0d3435241d3f7a3ad8ce40edf64b3bff1547528c13da0"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.477997 4737 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.478041 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.479256 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" event={"ID":"f792056c-fffa-4089-a040-8e09a1d6489f","Type":"ContainerStarted","Data":"bc53cf8c8d19cda118fd4aaef0a2c684b8563c64cbcbd6a5f88961d0b49d376c"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.480571 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" event={"ID":"90b067c5-a234-4e7f-a68b-e0b1c5cdac35","Type":"ContainerStarted","Data":"51f7b2a5ea58682bdf6c49e52fb3b72644057a7014b106c98a733e6722f87efb"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.481269 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" event={"ID":"833792c1-41f1-45ee-b08b-aacc3388e916","Type":"ContainerStarted","Data":"d7366e50f632e1c2c57207c0c3afa8e0174f619e925f17c73dc22a7ace14ddff"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.481882 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" event={"ID":"036a0e85-4072-4906-90a1-c87c319a4abe","Type":"ContainerStarted","Data":"89d557f69aae99b96be70a57603574c1cd6cd6ed9882a4c89b99b8779cf27e64"} Jan 
26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.482522 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" event={"ID":"2c6d44e4-59b3-46ff-8a01-43c41890a722","Type":"ContainerStarted","Data":"5ff865253578d67dfdf7de11ccfcd64a4ebba3b79129a22fb1d22add059cf6cc"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.484136 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" event={"ID":"858fe62f-567a-47e7-9847-c393790eb41f","Type":"ContainerDied","Data":"3d551660c99eb8e839cfe1228475e475519e5a5c62b55b1a6f586ffc4aa14b5a"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.484305 4737 generic.go:334] "Generic (PLEG): container finished" podID="858fe62f-567a-47e7-9847-c393790eb41f" containerID="3d551660c99eb8e839cfe1228475e475519e5a5c62b55b1a6f586ffc4aa14b5a" exitCode=0 Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:43.486935 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:44 crc kubenswrapper[4737]: E0126 18:32:43.494019 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:43.993997676 +0000 UTC m=+137.302192384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.864240 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m5fhx" event={"ID":"8281dd1a-854f-48af-855b-bb3f8f2a2b2a","Type":"ContainerStarted","Data":"13987f802adf6b9b727f4329845b19f019946a50e93c2ac48bd76d9b18f85810"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.865511 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:44 crc kubenswrapper[4737]: E0126 18:32:44.865824 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.865789164 +0000 UTC m=+139.173983872 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.868930 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" event={"ID":"c8be3738-e6c1-4cc8-ae8a-a23387b73213","Type":"ContainerStarted","Data":"3bc218166a6398f670da8149292e01c9405e6ae23e446edaff9b2d83595e37f9"} Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.873732 4737 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9kjp9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.873798 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.876623 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" event={"ID":"887b083d-2d4b-4231-a109-f2e1d5d14c39","Type":"ContainerStarted","Data":"329f6dde84dd70530616f1ae965c868089e467b3be7d7359cd4438d8f70e3b00"} Jan 26 18:32:44 crc 
kubenswrapper[4737]: I0126 18:32:44.896175 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7"] Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.940889 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r"] Jan 26 18:32:44 crc kubenswrapper[4737]: I0126 18:32:44.982543 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:44 crc kubenswrapper[4737]: E0126 18:32:44.992731 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.492704385 +0000 UTC m=+138.800899093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.086734 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hbdm4" podStartSLOduration=121.086707842 podStartE2EDuration="2m1.086707842s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.045742386 +0000 UTC m=+138.353937094" watchObservedRunningTime="2026-01-26 18:32:45.086707842 +0000 UTC m=+138.394902550" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.096380 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.096846 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.596827113 +0000 UTC m=+138.905021821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.114210 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fm6nl"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.209723 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.210677 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.710656761 +0000 UTC m=+139.018851469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.211655 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-htkzj" podStartSLOduration=121.211621877 podStartE2EDuration="2m1.211621877s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.2092048 +0000 UTC m=+138.517399508" watchObservedRunningTime="2026-01-26 18:32:45.211621877 +0000 UTC m=+138.519816585" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.263319 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-l9spd" podStartSLOduration=121.26329172 podStartE2EDuration="2m1.26329172s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.261748137 +0000 UTC m=+138.569942845" watchObservedRunningTime="2026-01-26 18:32:45.26329172 +0000 UTC m=+138.571486428" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.322183 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" podStartSLOduration=121.322159943 podStartE2EDuration="2m1.322159943s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.321624898 +0000 UTC m=+138.629819606" watchObservedRunningTime="2026-01-26 18:32:45.322159943 +0000 UTC m=+138.630354651" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.324035 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.326628 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.826612117 +0000 UTC m=+139.134806825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.349461 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgt58"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.352046 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bbw9t"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.356034 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" podStartSLOduration=120.356013402 podStartE2EDuration="2m0.356013402s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.353725619 +0000 UTC m=+138.661920327" watchObservedRunningTime="2026-01-26 18:32:45.356013402 +0000 UTC m=+138.664208110" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.382536 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.400278 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnrns" podStartSLOduration=121.400239279 podStartE2EDuration="2m1.400239279s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.391547188 +0000 UTC m=+138.699741906" watchObservedRunningTime="2026-01-26 18:32:45.400239279 +0000 UTC m=+138.708434007" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.429054 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.430004 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:45.929979533 +0000 UTC m=+139.238174241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.465132 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-scmj7"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.477288 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.477382 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-shctm" podStartSLOduration=120.477369798 podStartE2EDuration="2m0.477369798s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.450898133 +0000 UTC m=+138.759092841" watchObservedRunningTime="2026-01-26 18:32:45.477369798 +0000 UTC m=+138.785564506" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.531296 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.531627 4737 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.031614363 +0000 UTC m=+139.339809061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.542017 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-s7n9n" podStartSLOduration=120.54199021 podStartE2EDuration="2m0.54199021s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.474309143 +0000 UTC m=+138.782503851" watchObservedRunningTime="2026-01-26 18:32:45.54199021 +0000 UTC m=+138.850184918" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.543992 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" podStartSLOduration=121.543985096 podStartE2EDuration="2m1.543985096s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.531804788 +0000 UTC m=+138.839999496" watchObservedRunningTime="2026-01-26 18:32:45.543985096 +0000 UTC m=+138.852179804" Jan 26 
18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.581592 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-brpd2" podStartSLOduration=121.581571028 podStartE2EDuration="2m1.581571028s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:45.579538212 +0000 UTC m=+138.887732920" watchObservedRunningTime="2026-01-26 18:32:45.581571028 +0000 UTC m=+138.889765736" Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.638117 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.638760 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.138738623 +0000 UTC m=+139.446933331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.715035 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.729825 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.740327 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.740929 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.240912337 +0000 UTC m=+139.549107055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.781514 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.814668 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.826719 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8p4v9"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.831032 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k965v"] Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.842612 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.843861 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.343835853 +0000 UTC m=+139.652030561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.900580 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" event={"ID":"3c8453aa-abd7-49cc-a743-5e6bb8649740","Type":"ContainerStarted","Data":"e13cd18c149cb3f4d1196c06cddcdc3e38ca876528eea35cff26ce14e3247903"} Jan 26 18:32:45 crc kubenswrapper[4737]: W0126 18:32:45.922242 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e356af_cb63_4f1c_9b53_d226b15d5a35.slice/crio-7881d97cfcf5497fcfcb0e2177da46c34013a36a148faabfd3ccf72a380ac7bb WatchSource:0}: Error finding container 7881d97cfcf5497fcfcb0e2177da46c34013a36a148faabfd3ccf72a380ac7bb: Status 404 returned error can't find the container with id 7881d97cfcf5497fcfcb0e2177da46c34013a36a148faabfd3ccf72a380ac7bb Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.926516 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" event={"ID":"ac652a18-5fbd-483e-94d1-0782ee0cc3ac","Type":"ContainerStarted","Data":"3717827fc8efbb8b95cc6d13b3247b6fd34c1a1e3bc5b019f720e00d07062152"} Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.946204 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" 
event={"ID":"34949d5f-f358-40f5-8b72-7e82ec14b2ad","Type":"ContainerStarted","Data":"7c3faf40e0a80c0d4f21641847a07df5bb0f7820eb9cd1c57725b1b90ee41512"} Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.947479 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:45 crc kubenswrapper[4737]: E0126 18:32:45.947874 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.447854347 +0000 UTC m=+139.756049055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.998917 4737 csr.go:261] certificate signing request csr-2t4fb is approved, waiting to be issued Jan 26 18:32:45 crc kubenswrapper[4737]: I0126 18:32:45.998969 4737 csr.go:257] certificate signing request csr-2t4fb is issued Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.025312 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" 
event={"ID":"b69e01a5-0952-496d-97cd-21586e50a7de","Type":"ContainerStarted","Data":"91fdc3d6739ae59e36895ecc746d2ecf843cb173c622bd2489042cd3db2e8acf"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.036385 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" event={"ID":"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d","Type":"ContainerStarted","Data":"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.036434 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" event={"ID":"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d","Type":"ContainerStarted","Data":"de152d55b63860c94e56c78a1aee141c43521fc3edbbf92a63419be8ba723178"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.037500 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.045431 4737 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-n7cr7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.045496 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.048537 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.051586 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" event={"ID":"ed97d0e9-4ae3-4db6-9635-38141f37948e","Type":"ContainerStarted","Data":"a0061f921ea3ed791993319bfe2f598c87d2a110f32cd01848fd34b4d5588d98"} Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.053466 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.553444677 +0000 UTC m=+139.861639385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.083441 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" event={"ID":"d8ad60c4-c4e9-48bd-bb54-f22bef5a8b76","Type":"ContainerStarted","Data":"d4599ee7488a56d7f81d534ce452a8e423de129d513db76656849ebe8abb977a"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.096945 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-p6k9r" podStartSLOduration=121.085939218 
podStartE2EDuration="2m1.085939218s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.083785088 +0000 UTC m=+139.391979796" watchObservedRunningTime="2026-01-26 18:32:46.085939218 +0000 UTC m=+139.394133926" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.129960 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" event={"ID":"b40b453c-36fe-4b0b-8e67-12715f0e15e7","Type":"ContainerStarted","Data":"cb6560dcc7fd90e4975c4a8551c6431bc006dbf2286276728c6b165bd48daf0e"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.141741 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" podStartSLOduration=122.141716785 podStartE2EDuration="2m2.141716785s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.140314596 +0000 UTC m=+139.448509304" watchObservedRunningTime="2026-01-26 18:32:46.141716785 +0000 UTC m=+139.449911493" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.154716 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.157603 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-26 18:32:46.657584055 +0000 UTC m=+139.965778763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.169824 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kzwmx" podStartSLOduration=122.169797734 podStartE2EDuration="2m2.169797734s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.169460284 +0000 UTC m=+139.477654992" watchObservedRunningTime="2026-01-26 18:32:46.169797734 +0000 UTC m=+139.477992432" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.171669 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" event={"ID":"cf12407d-16ca-40d9-8279-f46693aee8b1","Type":"ContainerStarted","Data":"a261312a01472472625bc75213e2519b57c81ed9b0cfc107cd692591de9baf16"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.191335 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" event={"ID":"d0215af9-47a6-42bb-bb48-29c002caff5a","Type":"ContainerStarted","Data":"c8c2c590bb168a5fd1a2f15503cf029734486d622d6b88012366bd599f9abcd3"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.209403 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" event={"ID":"2c6d44e4-59b3-46ff-8a01-43c41890a722","Type":"ContainerStarted","Data":"a4f57cc392c8f2aab3b33c40942f5fd7a97a25e1f6d946ee7e1f925ead0c5cd8"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.210472 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.212392 4737 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-t77ps container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.212433 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" podUID="2c6d44e4-59b3-46ff-8a01-43c41890a722" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.229539 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" event={"ID":"8d2d9bc1-4264-4633-af76-b57166070ab0","Type":"ContainerStarted","Data":"675178058d066415ca114c58afff90be13dd26161d0874d0e664283692798609"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.236295 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgt58" event={"ID":"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9","Type":"ContainerStarted","Data":"037f0ec9b6f621ff98adefe97128cde0340fdb869b2fcdabcdf732e69c59c613"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.247270 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-wwzqx" event={"ID":"60a6a19b-baa5-47c5-8733-202b5bfd0c97","Type":"ContainerStarted","Data":"5d77bd83a1097eef12238ff421bd5d6ee0d2f3eb3914f8f40b41a00683dfe3b7"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.256358 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.257658 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.75763813 +0000 UTC m=+140.065832838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.263703 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" podStartSLOduration=121.263690098 podStartE2EDuration="2m1.263690098s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.230283481 +0000 UTC m=+139.538478189" 
watchObservedRunningTime="2026-01-26 18:32:46.263690098 +0000 UTC m=+139.571884806" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.263784 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" podStartSLOduration=121.263780671 podStartE2EDuration="2m1.263780671s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.261868387 +0000 UTC m=+139.570063115" watchObservedRunningTime="2026-01-26 18:32:46.263780671 +0000 UTC m=+139.571975369" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.281512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" event={"ID":"ad4a4950-08fa-4707-8af8-4814f89b5ec8","Type":"ContainerStarted","Data":"84f316b20bc58161ee68d9e5ae735c43ffe88eefb830136643f103807df71d0f"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.359414 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.361994 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wwzqx" podStartSLOduration=121.361961004 podStartE2EDuration="2m1.361961004s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.345508187 +0000 UTC m=+139.653702895" 
watchObservedRunningTime="2026-01-26 18:32:46.361961004 +0000 UTC m=+139.670155712" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.364188 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" event={"ID":"c8be3738-e6c1-4cc8-ae8a-a23387b73213","Type":"ContainerStarted","Data":"fff810ccd79fda68dbe8a51d0dec24b7740c30814e587092d9803d3f0ee3a1bd"} Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.365849 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.865833721 +0000 UTC m=+140.174028429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.403145 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" event={"ID":"38ea1569-149a-4a65-a61d-021204d2cde6","Type":"ContainerStarted","Data":"bc3951cc6e2dd048081d6a61a3aae0dd262de2022f6d51869703eda05ae2a08e"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.436406 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" podStartSLOduration=122.436381967 podStartE2EDuration="2m2.436381967s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.367828466 +0000 UTC m=+139.676023174" watchObservedRunningTime="2026-01-26 18:32:46.436381967 +0000 UTC m=+139.744576675" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.461363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" event={"ID":"90b067c5-a234-4e7f-a68b-e0b1c5cdac35","Type":"ContainerStarted","Data":"c810d28b3a05f72df7d584a3f2728ee8d592f223d107b128ce25676c8fc6683e"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.461860 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.463029 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:46.963011176 +0000 UTC m=+140.271205884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.484727 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" event={"ID":"eec275ca-9658-4733-b311-48a052e4e843","Type":"ContainerStarted","Data":"04301efa9b877195639b5cd7785d45543d2b29c7d79cd2cd0eae22c876e0fcc1"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.493874 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m5fhx" event={"ID":"8281dd1a-854f-48af-855b-bb3f8f2a2b2a","Type":"ContainerStarted","Data":"f22e9575fe422b5e6fd62911ff7afab74f82688947e46a0c5acb89555a71445e"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.513808 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" event={"ID":"036a0e85-4072-4906-90a1-c87c319a4abe","Type":"ContainerStarted","Data":"5e67061cbbe3c49c72113b4f6332e87fd46bb41acecbaf9323d9e9f7b8015f91"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.522295 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f84g9" podStartSLOduration=122.522268109 podStartE2EDuration="2m2.522268109s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.437672163 +0000 UTC m=+139.745866871" 
watchObservedRunningTime="2026-01-26 18:32:46.522268109 +0000 UTC m=+139.830462817" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.551564 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" event={"ID":"754c2fa2-3520-4a1e-a052-16c16efc7d51","Type":"ContainerStarted","Data":"96537de4c1b204c8b4f535bb8d9fd5742285bb5be17844d420488398e27c97c9"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.558947 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" event={"ID":"a8e30f97-e004-4054-9ffb-9f1bb9df0470","Type":"ContainerStarted","Data":"318d4c52635226de6e2d4d5cf85e7c80384170e0f9756ae07360d98044f4211e"} Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.570954 4737 patch_prober.go:28] interesting pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.571030 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.572318 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.573395 4737 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.073374537 +0000 UTC m=+140.381569245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.575346 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m5fhx" podStartSLOduration=7.575326921 podStartE2EDuration="7.575326921s" podCreationTimestamp="2026-01-26 18:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.52337016 +0000 UTC m=+139.831564868" watchObservedRunningTime="2026-01-26 18:32:46.575326921 +0000 UTC m=+139.883521639" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.575789 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" podStartSLOduration=121.575781404 podStartE2EDuration="2m1.575781404s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:46.571220187 +0000 UTC m=+139.879414905" watchObservedRunningTime="2026-01-26 18:32:46.575781404 +0000 UTC m=+139.883976112" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 
18:32:46.579566 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.599538 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.682163 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.684063 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.184038097 +0000 UTC m=+140.492232805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.799238 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.799644 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.299629353 +0000 UTC m=+140.607824061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.803528 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p7ll4" Jan 26 18:32:46 crc kubenswrapper[4737]: I0126 18:32:46.900253 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:46 crc kubenswrapper[4737]: E0126 18:32:46.901173 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.401139118 +0000 UTC m=+140.709333826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.000897 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 18:27:45 +0000 UTC, rotation deadline is 2026-10-22 10:49:34.273313415 +0000 UTC Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.000953 4737 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6448h16m47.272363569s for next certificate rotation Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.001991 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.002546 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.5025295 +0000 UTC m=+140.810724208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.104000 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.104600 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.60457818 +0000 UTC m=+140.912772888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.206675 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.208085 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.70804274 +0000 UTC m=+141.016237448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.210173 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.240471 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:47 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:47 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:47 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.241246 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.316366 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.316823 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.816800307 +0000 UTC m=+141.124995015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.431306 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.435301 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:47.935282974 +0000 UTC m=+141.243477682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.541098 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.541272 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.041236452 +0000 UTC m=+141.349431160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.541893 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.542322 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.042313902 +0000 UTC m=+141.350508610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.607734 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" event={"ID":"c8be3738-e6c1-4cc8-ae8a-a23387b73213","Type":"ContainerStarted","Data":"a7e51a220bf41059e5692d1f26477bcc3326fc3ee945762522643ecaf58c91ca"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.636474 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" event={"ID":"ac652a18-5fbd-483e-94d1-0782ee0cc3ac","Type":"ContainerStarted","Data":"5843b80d4421ac37b77474ec11c8789e959f8d0527152c55f5e1fa7681a2742e"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.643814 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.644280 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.14425955 +0000 UTC m=+141.452454258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.669336 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ktwh7" podStartSLOduration=122.669314335 podStartE2EDuration="2m2.669314335s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.666146027 +0000 UTC m=+140.974340745" watchObservedRunningTime="2026-01-26 18:32:47.669314335 +0000 UTC m=+140.977509043" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.672384 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" event={"ID":"a8407c17-c270-4f2c-be13-4b03ee2bbc28","Type":"ContainerStarted","Data":"9dd133ae90ba62457c59c6dcf9ae753df806287b22dd9b9f3b8bff939de48a3f"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.672445 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" event={"ID":"a8407c17-c270-4f2c-be13-4b03ee2bbc28","Type":"ContainerStarted","Data":"20ca57ef5338da0a41a194cb8dbb099b1c57027e00ddadcad0942bcd5d270c9d"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.673082 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.675805 
4737 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sb8td container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.675918 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" podUID="a8407c17-c270-4f2c-be13-4b03ee2bbc28" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.696967 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" event={"ID":"4fecb426-1ec9-4ee4-aee7-f079d088dea4","Type":"ContainerStarted","Data":"1795d401a6680aa84c886fad679a20ded29bc128b2c7a798dc7f75cb7def53ee"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.726805 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z594r" event={"ID":"ad4a4950-08fa-4707-8af8-4814f89b5ec8","Type":"ContainerStarted","Data":"7c786a98b0036326a221fa186f320fe548c4d3d4335b520f10bd29a02e1872ed"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.737219 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" event={"ID":"d1595442-c281-470d-a08c-b04158a7c899","Type":"ContainerStarted","Data":"807473d3546a6aaf4758beec8eb710d3fb4b13943ee44753fa8234c989a54ac1"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.737291 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" 
event={"ID":"d1595442-c281-470d-a08c-b04158a7c899","Type":"ContainerStarted","Data":"86fe3887aead0b2a2fa5fc462cd1e4cbfbbf1a5e772bb2b4affe7cf5144dfb80"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.745255 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.746748 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.246730832 +0000 UTC m=+141.554925740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.751213 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" podStartSLOduration=123.751185986 podStartE2EDuration="2m3.751185986s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.716047231 +0000 UTC m=+141.024241939" watchObservedRunningTime="2026-01-26 18:32:47.751185986 
+0000 UTC m=+141.059380694" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.752990 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" event={"ID":"cf12407d-16ca-40d9-8279-f46693aee8b1","Type":"ContainerStarted","Data":"62915b433e1704acae6886c91c767fabf874cb06a240b096399dbc2615bebd61"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.753846 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nqcjp" podStartSLOduration=123.753837069 podStartE2EDuration="2m3.753837069s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.749842978 +0000 UTC m=+141.058037676" watchObservedRunningTime="2026-01-26 18:32:47.753837069 +0000 UTC m=+141.062031777" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.784993 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" event={"ID":"34949d5f-f358-40f5-8b72-7e82ec14b2ad","Type":"ContainerStarted","Data":"9d9732436d607efb592550b7e84954aa63596182f605779b0ff1777c6cc167f8"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.785243 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" event={"ID":"34949d5f-f358-40f5-8b72-7e82ec14b2ad","Type":"ContainerStarted","Data":"dc8b7630ac3fb7d063aefbee7d31bb28676dad9fd8c2722c7fa4bc74b43b1963"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.800670 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" podStartSLOduration=122.800638737 podStartE2EDuration="2m2.800638737s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.787805671 +0000 UTC m=+141.096000379" watchObservedRunningTime="2026-01-26 18:32:47.800638737 +0000 UTC m=+141.108833445" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.825212 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhhdn" podStartSLOduration=122.825188138 podStartE2EDuration="2m2.825188138s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.821476195 +0000 UTC m=+141.129670903" watchObservedRunningTime="2026-01-26 18:32:47.825188138 +0000 UTC m=+141.133382846" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.847081 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.848022 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.34798004 +0000 UTC m=+141.656174818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.868339 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" event={"ID":"eec275ca-9658-4733-b311-48a052e4e843","Type":"ContainerStarted","Data":"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.869647 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.888131 4737 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gftx9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.888218 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.918103 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" 
event={"ID":"754c2fa2-3520-4a1e-a052-16c16efc7d51","Type":"ContainerStarted","Data":"a57eff72485bafc4f2adc9537b75c36cd204693afc8472ac95fc57cca0426c77"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.942447 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" event={"ID":"833792c1-41f1-45ee-b08b-aacc3388e916","Type":"ContainerStarted","Data":"7a74442f9664874d1e2a23785224cd397c3fe6f83a3deb154c889a0a61766aa9"} Jan 26 18:32:47 crc kubenswrapper[4737]: I0126 18:32:47.950009 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:47 crc kubenswrapper[4737]: E0126 18:32:47.951496 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.451480741 +0000 UTC m=+141.759675439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.003453 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" event={"ID":"a8e30f97-e004-4054-9ffb-9f1bb9df0470","Type":"ContainerStarted","Data":"90437cddaff78ac67fb544eaf1325159d1189ee8eca071bb1505e5488fee845c"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.005666 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.006666 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6f78q" podStartSLOduration=123.006638102 podStartE2EDuration="2m3.006638102s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.006116687 +0000 UTC m=+141.314311415" watchObservedRunningTime="2026-01-26 18:32:48.006638102 +0000 UTC m=+141.314832810" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.015453 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mv7h7" podStartSLOduration=123.013514972 podStartE2EDuration="2m3.013514972s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:47.905717603 +0000 UTC m=+141.213912311" watchObservedRunningTime="2026-01-26 18:32:48.013514972 +0000 UTC m=+141.321709680" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.020736 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.051933 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.063011 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k965v" event={"ID":"a2b6e28b-2e70-4f70-9284-942460f8d1fd","Type":"ContainerStarted","Data":"0edd5b098633d38d7a913ca6d7c10667a7ee40ba1e487d6544ac784d33f7b95c"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.063097 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k965v" event={"ID":"a2b6e28b-2e70-4f70-9284-942460f8d1fd","Type":"ContainerStarted","Data":"b0921c45851bac1f9d6dbe87b19167cf8485c318681ef48a2802ddc24e22c507"} Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.066035 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.565904355 +0000 UTC m=+141.874099063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.122199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" event={"ID":"3c8453aa-abd7-49cc-a743-5e6bb8649740","Type":"ContainerStarted","Data":"15e422aec8f4614ef32b905c32b309235844008d403452f009f4132ca61d3382"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.148433 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" event={"ID":"7637c14c-92d8-4049-945c-33d6c7f7f9d1","Type":"ContainerStarted","Data":"46a77a3f8201971ddfa531874db5ac3009f667b8cea699a06cd8932251a99ac9"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.169052 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.169921 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.669902159 +0000 UTC m=+141.978096867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.172910 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgt58" event={"ID":"0b8a65d4-ee10-4c70-bcef-cd823b4a7cc9","Type":"ContainerStarted","Data":"b7329d80fec269ea5008cbefced4408233b470db0579571ae4cc03a8821b1076"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.205596 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" event={"ID":"17e356af-cb63-4f1c-9b53-d226b15d5a35","Type":"ContainerStarted","Data":"7b26bba1faed0f7fd4e9a9b00cf9ea15d1c7a706074c4105e94c050cfe6f12ef"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.205662 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" event={"ID":"17e356af-cb63-4f1c-9b53-d226b15d5a35","Type":"ContainerStarted","Data":"7881d97cfcf5497fcfcb0e2177da46c34013a36a148faabfd3ccf72a380ac7bb"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.233196 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:48 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:48 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:48 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 
18:32:48.233271 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.237385 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" event={"ID":"b40b453c-36fe-4b0b-8e67-12715f0e15e7","Type":"ContainerStarted","Data":"e3aafeceecb2247876e09c616c5119d5f20c49e63f56132a7f8ae019d5637b28"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.275594 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.276356 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.776304631 +0000 UTC m=+142.084499359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.279286 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" event={"ID":"d0215af9-47a6-42bb-bb48-29c002caff5a","Type":"ContainerStarted","Data":"5b7e5ff7b0361ea6c46f3fdd9a5751d80cf960e06736956418a04a015fd5eeb9"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.280153 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.316791 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" event={"ID":"858fe62f-567a-47e7-9847-c393790eb41f","Type":"ContainerStarted","Data":"b448f7b2795f272a07aa778eb140778aef921577cd933b0953ef80c8a2d3d1bc"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.334864 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" event={"ID":"90b067c5-a234-4e7f-a68b-e0b1c5cdac35","Type":"ContainerStarted","Data":"7715f23cf1b3fb765c9a7bff8cd29a2d30517555e882c170c2e519e5a49c92bf"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.337587 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qw4sc" 
event={"ID":"036a0e85-4072-4906-90a1-c87c319a4abe","Type":"ContainerStarted","Data":"0a1886ddebe6e9ea63ffe92aba15acc846188b214766dca9f734087b90ceeb59"} Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.356062 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.365581 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.366749 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-t77ps" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.369016 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.377130 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.379561 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.879545784 +0000 UTC m=+142.187740492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.385760 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.395155 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.396935 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jxrhw" podStartSLOduration=123.396913306 podStartE2EDuration="2m3.396913306s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.395207669 +0000 UTC m=+141.703402377" watchObservedRunningTime="2026-01-26 18:32:48.396913306 +0000 UTC m=+141.705108014" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.406936 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.443635 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.444925 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.447824 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.463684 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.478079 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.478660 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.479108 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.479133 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xnx\" (UniqueName: \"kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx\") pod \"certified-operators-f4ldv\" (UID: 
\"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.479721 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:48.979705442 +0000 UTC m=+142.287900150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.502115 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" podStartSLOduration=123.502088043 podStartE2EDuration="2m3.502088043s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.501385294 +0000 UTC m=+141.809580002" watchObservedRunningTime="2026-01-26 18:32:48.502088043 +0000 UTC m=+141.810282751" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.531151 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n2t8j" podStartSLOduration=123.531132319 podStartE2EDuration="2m3.531132319s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 18:32:48.529547055 +0000 UTC m=+141.837741763" watchObservedRunningTime="2026-01-26 18:32:48.531132319 +0000 UTC m=+141.839327027" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.580044 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.580096 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.580169 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.580529 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.080512038 +0000 UTC m=+142.388706736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.580558 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.581241 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.581283 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.581324 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xnx\" (UniqueName: \"kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " 
pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.581687 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhsg\" (UniqueName: \"kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.581742 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.612314 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vkl6w" podStartSLOduration=123.61228466 podStartE2EDuration="2m3.61228466s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.571137859 +0000 UTC m=+141.879332577" watchObservedRunningTime="2026-01-26 18:32:48.61228466 +0000 UTC m=+141.920479368" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.623444 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xnx\" (UniqueName: \"kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx\") pod \"certified-operators-f4ldv\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.633744 4737 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8phw8" podStartSLOduration=123.633718984 podStartE2EDuration="2m3.633718984s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.632999504 +0000 UTC m=+141.941194212" watchObservedRunningTime="2026-01-26 18:32:48.633718984 +0000 UTC m=+141.941913692" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.651945 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.653009 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.671216 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.682493 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.683024 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.683147 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhsg\" 
(UniqueName: \"kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.683187 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.684469 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.684747 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.684881 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.184854113 +0000 UTC m=+142.493048821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.727843 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.776901 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhsg\" (UniqueName: \"kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg\") pod \"community-operators-6tf2g\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.778521 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.786867 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.786916 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.786952 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsnx\" (UniqueName: \"kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.787009 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.787680 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.287662934 +0000 UTC m=+142.595857642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.894568 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.894847 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnsnx\" (UniqueName: \"kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.894910 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.894968 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:48 crc kubenswrapper[4737]: E0126 18:32:48.904698 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.404650029 +0000 UTC m=+142.712844747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.915859 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.919269 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:48 crc kubenswrapper[4737]: I0126 18:32:48.963186 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.000323 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ww4w\" (UniqueName: \"kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.000402 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.000474 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.000506 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 
18:32:49.000993 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.500972771 +0000 UTC m=+142.809167489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.002552 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qgt58" podStartSLOduration=10.002520644 podStartE2EDuration="10.002520644s" podCreationTimestamp="2026-01-26 18:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:48.982116147 +0000 UTC m=+142.290310845" watchObservedRunningTime="2026-01-26 18:32:49.002520644 +0000 UTC m=+142.310715352" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.046325 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8p4v9" podStartSLOduration=124.046297108 podStartE2EDuration="2m4.046297108s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.010120624 +0000 UTC m=+142.318315342" watchObservedRunningTime="2026-01-26 18:32:49.046297108 +0000 UTC m=+142.354491816" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.102232 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.102537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ww4w\" (UniqueName: \"kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.102575 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.102628 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.104455 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.604427601 +0000 UTC m=+142.912622309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.104834 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.163836 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ww4w\" (UniqueName: \"kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.204175 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.204688 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 18:32:49.70467197 +0000 UTC m=+143.012866678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.229124 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:49 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:49 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:49 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.229705 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.307690 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.308255 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.808235123 +0000 UTC m=+143.116429831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.387732 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k965v" event={"ID":"a2b6e28b-2e70-4f70-9284-942460f8d1fd","Type":"ContainerStarted","Data":"9dfd9dfd2ebec0c1c2e35af7021aa7f980848ee7aa9ae916a2540ee800648b34"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.413733 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.414256 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:49.914239164 +0000 UTC m=+143.222433872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.439405 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" event={"ID":"3c8453aa-abd7-49cc-a743-5e6bb8649740","Type":"ContainerStarted","Data":"f44d9d294d508734a55e7ccecd289e7ba6c5efed3fb664f89b4d89174294d3bf"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.504393 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" event={"ID":"d0215af9-47a6-42bb-bb48-29c002caff5a","Type":"ContainerStarted","Data":"a98dcb7055da9010473434f3cdf343b1a967d67458a6f181a7356176b90beaae"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.514642 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.515161 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.015125992 +0000 UTC m=+143.323320690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.527552 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" event={"ID":"858fe62f-567a-47e7-9847-c393790eb41f","Type":"ContainerStarted","Data":"ab2c79ca6c14b6af65e161caa8c2c14ded8c5842893851a6efda9df2de15073c"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.557314 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities\") pod \"community-operators-lrlts\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.562619 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.565598 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.567220 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" event={"ID":"833792c1-41f1-45ee-b08b-aacc3388e916","Type":"ContainerStarted","Data":"c86281a57595e4435ed235f351e1e683c7887977ec223b7bc0cc76d1582240cb"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.594250 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.596218 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" podStartSLOduration=125.596203851 podStartE2EDuration="2m5.596203851s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.246811289 +0000 UTC m=+142.555005997" watchObservedRunningTime="2026-01-26 18:32:49.596203851 +0000 UTC m=+142.904398559" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.610270 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnsnx\" (UniqueName: \"kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx\") pod \"certified-operators-ndrff\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.616433 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.616823 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.116806743 +0000 UTC m=+143.425001451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.618239 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" event={"ID":"ed97d0e9-4ae3-4db6-9635-38141f37948e","Type":"ContainerStarted","Data":"76c53a16b58c98335e86e3f9ccec426262130f6086fdf33557e79128f2eda5c5"} Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.623213 4737 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gftx9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.623282 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.641280 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" 
podStartSLOduration=124.641182608 podStartE2EDuration="2m4.641182608s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.28683222 +0000 UTC m=+142.595026928" watchObservedRunningTime="2026-01-26 18:32:49.641182608 +0000 UTC m=+142.949377316" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.643355 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-scmj7" podStartSLOduration=125.643347549 podStartE2EDuration="2m5.643347549s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.320845653 +0000 UTC m=+142.629040361" watchObservedRunningTime="2026-01-26 18:32:49.643347549 +0000 UTC m=+142.951542257" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.656501 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fm6nl" podStartSLOduration=125.656476252 podStartE2EDuration="2m5.656476252s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.496579158 +0000 UTC m=+142.804773866" watchObservedRunningTime="2026-01-26 18:32:49.656476252 +0000 UTC m=+142.964670960" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.664283 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.665269 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.667415 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4vb5" podStartSLOduration=124.667396395 podStartE2EDuration="2m4.667396395s" podCreationTimestamp="2026-01-26 18:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:49.650724443 +0000 UTC m=+142.958919151" watchObservedRunningTime="2026-01-26 18:32:49.667396395 +0000 UTC m=+142.975591103" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.670034 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.673824 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.674114 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.717956 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.718505 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:32:50.218446851 +0000 UTC m=+143.526641559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.719425 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.719862 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.719979 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.736895 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.236858772 +0000 UTC m=+143.545053480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.790817 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.813994 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.826320 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.826625 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.826717 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.827525 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.327462215 +0000 UTC m=+143.635656923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.827582 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.851986 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.879630 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:32:49 crc kubenswrapper[4737]: I0126 18:32:49.930017 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:49 crc kubenswrapper[4737]: E0126 18:32:49.931005 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.430989026 +0000 UTC m=+143.739183724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.027431 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.035721 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.036390 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.536363719 +0000 UTC m=+143.844558427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.145241 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.145751 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.645734362 +0000 UTC m=+143.953929070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.219711 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:50 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:50 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:50 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.220232 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.250088 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.250591 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.75057137 +0000 UTC m=+144.058766078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.250902 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.252250 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.258629 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.283232 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.354927 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8qk\" (UniqueName: \"kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.354987 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.355022 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.355090 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content\") 
pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.355433 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.855419519 +0000 UTC m=+144.163614217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.421283 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.457189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.457447 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.457550 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8qk\" (UniqueName: \"kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.457615 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.458775 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.459625 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.459744 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:50.959723332 +0000 UTC m=+144.267918040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: W0126 18:32:50.470047 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6f7537_6a89_4f64_a0a1_c96e49c575db.slice/crio-67632fe5344747d496459b7026b3bb49d41e3c1f87f7c82a5f544dc67917c9b4 WatchSource:0}: Error finding container 67632fe5344747d496459b7026b3bb49d41e3c1f87f7c82a5f544dc67917c9b4: Status 404 returned error can't find the container with id 67632fe5344747d496459b7026b3bb49d41e3c1f87f7c82a5f544dc67917c9b4 Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.519179 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8qk\" (UniqueName: \"kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk\") pod \"redhat-marketplace-5j2cd\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.559119 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.559477 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.059463298 +0000 UTC m=+144.367658006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.623896 4737 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sb8td container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded" start-of-body= Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.623983 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" podUID="a8407c17-c270-4f2c-be13-4b03ee2bbc28" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.633736 4737 generic.go:334] "Generic (PLEG): container finished" podID="7acd9116-baab-48b1-ab22-7310f60fada8" containerID="1803deef02265f1d97ac124d2f1daf6de0fbee22510ca792151b3ca7b7f44922" exitCode=0 Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.633831 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerDied","Data":"1803deef02265f1d97ac124d2f1daf6de0fbee22510ca792151b3ca7b7f44922"} Jan 26 18:32:50 crc 
kubenswrapper[4737]: I0126 18:32:50.633870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerStarted","Data":"42a8b0280e26f30c15d929c4022b28250c8cb4087a58203a5a92cc70a84622f3"} Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.640716 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.657743 4737 generic.go:334] "Generic (PLEG): container finished" podID="ac652a18-5fbd-483e-94d1-0782ee0cc3ac" containerID="5843b80d4421ac37b77474ec11c8789e959f8d0527152c55f5e1fa7681a2742e" exitCode=0 Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.657886 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" event={"ID":"ac652a18-5fbd-483e-94d1-0782ee0cc3ac","Type":"ContainerDied","Data":"5843b80d4421ac37b77474ec11c8789e959f8d0527152c55f5e1fa7681a2742e"} Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.661363 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.661954 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.16193166 +0000 UTC m=+144.470126368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.664156 4737 generic.go:334] "Generic (PLEG): container finished" podID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerID="662e43ac99f0d65716cd00ff4843a9ef4ed1637173c0916f1cc2c052cb169073" exitCode=0 Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.664241 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerDied","Data":"662e43ac99f0d65716cd00ff4843a9ef4ed1637173c0916f1cc2c052cb169073"} Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.664284 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerStarted","Data":"30baa7b350004a5bac49fb79337f01d8672b087a2948e604cd1185f6a6b9c2cf"} Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.673656 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.674889 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.677774 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerStarted","Data":"67632fe5344747d496459b7026b3bb49d41e3c1f87f7c82a5f544dc67917c9b4"} Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.679424 4737 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gftx9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.679467 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.689672 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.689787 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.741188 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.742191 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.751837 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.762844 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.764606 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.264582517 +0000 UTC m=+144.572777215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.818823 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.819261 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.830545 4737 patch_prober.go:28] interesting pod/console-f9d7485db-hbdm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.830613 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbdm4" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.863899 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.864274 4737 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.364233932 +0000 UTC m=+144.672428640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.864383 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.866901 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.867162 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fs5\" (UniqueName: \"kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " 
pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.867460 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.914901 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.414865486 +0000 UTC m=+144.723060194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.967017 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-k965v" podStartSLOduration=11.966990242 podStartE2EDuration="11.966990242s" podCreationTimestamp="2026-01-26 18:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:50.861709752 +0000 UTC m=+144.169904460" watchObservedRunningTime="2026-01-26 18:32:50.966990242 +0000 UTC m=+144.275184950" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.973343 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.973553 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.973616 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.973699 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fs5\" (UniqueName: \"kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: E0126 18:32:50.974267 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.474247403 +0000 UTC m=+144.782442111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.974686 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:50 crc kubenswrapper[4737]: I0126 18:32:50.995777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.020360 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fs5\" (UniqueName: \"kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5\") pod \"redhat-marketplace-2ql5w\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:51 crc kubenswrapper[4737]: W0126 18:32:51.030568 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a38d4e_7789_4e8b_abbc_da9d57d1bcc4.slice/crio-eb1d0d7853d05618ed0e72a3a81c3224d6ea1c3ab1d37e0acd9019d429294510 WatchSource:0}: Error finding container 
eb1d0d7853d05618ed0e72a3a81c3224d6ea1c3ab1d37e0acd9019d429294510: Status 404 returned error can't find the container with id eb1d0d7853d05618ed0e72a3a81c3224d6ea1c3ab1d37e0acd9019d429294510 Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.046823 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.075522 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.076225 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.576207611 +0000 UTC m=+144.884402319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.097536 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.177850 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.178706 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.678683293 +0000 UTC m=+144.986878001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.206209 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.206263 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.252047 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 26 18:32:51 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:51 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:51 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.252862 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.280179 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.280611 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.78058857 +0000 UTC m=+145.088783278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.360122 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-k965v" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.381284 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.381566 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.88154377 +0000 UTC m=+145.189738478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.412143 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.482554 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.482942 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:51.982929242 +0000 UTC m=+145.291123950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.584371 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.585322 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.085297352 +0000 UTC m=+145.393492060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.597090 4737 patch_prober.go:28] interesting pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.597159 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.597249 4737 patch_prober.go:28] interesting pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.597324 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.655447 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.656885 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.683960 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.697103 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.697166 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.697237 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.697287 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br78\" (UniqueName: \"kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78\") pod 
\"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.697734 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.19770503 +0000 UTC m=+145.505899918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.765245 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.784231 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" event={"ID":"ed97d0e9-4ae3-4db6-9635-38141f37948e","Type":"ContainerStarted","Data":"a1aaf2d54074841725102e30d0d42434093c1b31a26790c1f02382f9052449f6"} Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.811085 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.811361 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.811420 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7br78\" (UniqueName: \"kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.811526 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.812603 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.813626 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.313598314 +0000 UTC m=+145.621793022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.814003 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.821873 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73a06f56-82bf-4ba9-b974-aa1465790909","Type":"ContainerStarted","Data":"4143c25c9a2a77bb21ece97f392a9b86d035f75237f6526f82b5167285179e38"} Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.872280 4737 generic.go:334] "Generic (PLEG): container finished" podID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerID="5f71ee33ff31bae6f5c88211cf4796a1dd7929c1c3648d8ede100b04f11f95d5" exitCode=0 Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.872394 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerDied","Data":"5f71ee33ff31bae6f5c88211cf4796a1dd7929c1c3648d8ede100b04f11f95d5"} Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.893815 4737 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7jxs2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[+]ping ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]log ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]etcd ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/max-in-flight-filter ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 18:32:51 crc kubenswrapper[4737]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 18:32:51 crc kubenswrapper[4737]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-startinformers ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 18:32:51 crc kubenswrapper[4737]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 18:32:51 crc kubenswrapper[4737]: livez check failed Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.893966 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" podUID="858fe62f-567a-47e7-9847-c393790eb41f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.913180 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:51 crc kubenswrapper[4737]: E0126 18:32:51.913557 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.413540656 +0000 UTC m=+145.721735354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.953912 4737 generic.go:334] "Generic (PLEG): container finished" podID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerID="8b1141abf9c354ae94c01135e707e5fe8fbb13bd3d080402585b461c69bca673" exitCode=0 Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.955283 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerDied","Data":"8b1141abf9c354ae94c01135e707e5fe8fbb13bd3d080402585b461c69bca673"} Jan 26 18:32:51 crc kubenswrapper[4737]: I0126 18:32:51.955331 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerStarted","Data":"eb1d0d7853d05618ed0e72a3a81c3224d6ea1c3ab1d37e0acd9019d429294510"} Jan 26 18:32:51 crc 
kubenswrapper[4737]: I0126 18:32:51.984360 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gsfgx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.009765 4737 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.014017 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:52 crc kubenswrapper[4737]: E0126 18:32:52.015094 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.515055581 +0000 UTC m=+145.823250289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.061922 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.114834 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.138899 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.142060 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.144973 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7br78\" (UniqueName: \"kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78\") pod \"redhat-operators-nmjc5\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:52 crc kubenswrapper[4737]: E0126 18:32:52.145160 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.64513207 +0000 UTC m=+145.953326978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.176999 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.209454 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.219189 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:52 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:52 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:52 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.219255 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.246257 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.246529 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.246558 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.246576 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276bj\" (UniqueName: \"kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: E0126 18:32:52.247130 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.747098398 +0000 UTC m=+146.055293276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.353457 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.354014 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.354053 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.365809 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.365880 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276bj\" (UniqueName: 
\"kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.366265 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:52 crc kubenswrapper[4737]: E0126 18:32:52.366802 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:32:52.866781778 +0000 UTC m=+146.174976486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7c9pc" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.367092 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.368526 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.368968 4737 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T18:32:52.009804567Z","Handler":null,"Name":""} Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.374024 4737 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.374094 4737 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.410297 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.471228 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.495441 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276bj\" (UniqueName: 
\"kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj\") pod \"redhat-operators-4cxsx\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: W0126 18:32:52.507485 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod402663db_5331_4692_8539_f79973a5759b.slice/crio-c308165fc40bb2460cfac4661d0881107d8140919dbee4ebab81c419b168a237 WatchSource:0}: Error finding container c308165fc40bb2460cfac4661d0881107d8140919dbee4ebab81c419b168a237: Status 404 returned error can't find the container with id c308165fc40bb2460cfac4661d0881107d8140919dbee4ebab81c419b168a237 Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.550715 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.552241 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.559376 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sb8td" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.577980 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.662050 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.662132 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.896363 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.896419 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.896442 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.896476 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.901561 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.909799 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.914793 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:52 crc kubenswrapper[4737]: I0126 18:32:52.925923 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.020397 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.047334 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" event={"ID":"ed97d0e9-4ae3-4db6-9635-38141f37948e","Type":"ContainerStarted","Data":"2c53746d8d23705041c04b15f7f4d7447fb77e79b2480d8cfa200d94c5ae5a1c"} Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.064228 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" 
event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerStarted","Data":"c308165fc40bb2460cfac4661d0881107d8140919dbee4ebab81c419b168a237"} Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.085423 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7c9pc\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.105552 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.124666 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.128552 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerStarted","Data":"62b12fc853c195be5439930e04a17ff718e7b95940f216b5f4fbe5774671a839"} Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.128613 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerStarted","Data":"8aa19c5c62ddabae15482e4bbebe3267e8aa28d62e6ab9a1dccc93889621f080"} Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.129471 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.221310 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:53 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:53 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:53 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.221383 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.254191 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.255901 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73a06f56-82bf-4ba9-b974-aa1465790909","Type":"ContainerStarted","Data":"538fdd49d4ebdb8e5f3a33684e32d050352f7bdce7aee6e53d1ff5ad3cd14291"} Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.272431 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.332371 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume\") pod \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.332519 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume\") pod \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.333499 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac652a18-5fbd-483e-94d1-0782ee0cc3ac" (UID: "ac652a18-5fbd-483e-94d1-0782ee0cc3ac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.333764 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkdkp\" (UniqueName: \"kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp\") pod \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\" (UID: \"ac652a18-5fbd-483e-94d1-0782ee0cc3ac\") " Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.334196 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.361404 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.361371845 podStartE2EDuration="4.361371845s" podCreationTimestamp="2026-01-26 18:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:53.346102251 +0000 UTC m=+146.654296959" watchObservedRunningTime="2026-01-26 18:32:53.361371845 +0000 UTC m=+146.669566553" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.362192 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac652a18-5fbd-483e-94d1-0782ee0cc3ac" (UID: "ac652a18-5fbd-483e-94d1-0782ee0cc3ac"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.362852 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp" (OuterVolumeSpecName: "kube-api-access-nkdkp") pod "ac652a18-5fbd-483e-94d1-0782ee0cc3ac" (UID: "ac652a18-5fbd-483e-94d1-0782ee0cc3ac"). InnerVolumeSpecName "kube-api-access-nkdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.439307 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.448272 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkdkp\" (UniqueName: \"kubernetes.io/projected/ac652a18-5fbd-483e-94d1-0782ee0cc3ac-kube-api-access-nkdkp\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.553428 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:32:53 crc kubenswrapper[4737]: I0126 18:32:53.715152 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:32:53 crc kubenswrapper[4737]: W0126 18:32:53.795608 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01d16131_935e_4d13_8f42_d9ff3ce55769.slice/crio-42cab37fc48e04f9fd7ee09c0f56a72dcdc76c93a626c2235a7641482f6ab4f0 WatchSource:0}: Error finding container 42cab37fc48e04f9fd7ee09c0f56a72dcdc76c93a626c2235a7641482f6ab4f0: Status 404 returned error can't find the container with id 42cab37fc48e04f9fd7ee09c0f56a72dcdc76c93a626c2235a7641482f6ab4f0 Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 
18:32:54.216836 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:54 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:54 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:54 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.217357 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.266480 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2bab3826671f7b4f378aa6c621faf4b03b5afbb5eb926ad4f3abec2c8c99a891"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.269343 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" event={"ID":"ac652a18-5fbd-483e-94d1-0782ee0cc3ac","Type":"ContainerDied","Data":"3717827fc8efbb8b95cc6d13b3247b6fd34c1a1e3bc5b019f720e00d07062152"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.269479 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3717827fc8efbb8b95cc6d13b3247b6fd34c1a1e3bc5b019f720e00d07062152" Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.269473 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69" Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.272435 4737 generic.go:334] "Generic (PLEG): container finished" podID="402663db-5331-4692-8539-f79973a5759b" containerID="1c1d5d815c3e54786ed93282aa94ebfdc7104080d2cdaf25b34daccb90804cd0" exitCode=0 Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.272566 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerDied","Data":"1c1d5d815c3e54786ed93282aa94ebfdc7104080d2cdaf25b34daccb90804cd0"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.277835 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bf63e0e09d7e2d34f8b927a240cdc1b5e5f8adc96d8fffffa2033750e5f7a74c"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.280398 4737 generic.go:334] "Generic (PLEG): container finished" podID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerID="62b12fc853c195be5439930e04a17ff718e7b95940f216b5f4fbe5774671a839" exitCode=0 Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.280513 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerDied","Data":"62b12fc853c195be5439930e04a17ff718e7b95940f216b5f4fbe5774671a839"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.284926 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerStarted","Data":"42cab37fc48e04f9fd7ee09c0f56a72dcdc76c93a626c2235a7641482f6ab4f0"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.289687 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerStarted","Data":"cb3d80818ed16e0ed4a6e2abd51ab92e50846d996e7238e22e5dc42f98134011"} Jan 26 18:32:54 crc kubenswrapper[4737]: I0126 18:32:54.332486 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:32:54 crc kubenswrapper[4737]: W0126 18:32:54.353782 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-fde2ae2c23a8aa09b85b6f82830d6cb4344db2fc5547850376e4c8f03dcac125 WatchSource:0}: Error finding container fde2ae2c23a8aa09b85b6f82830d6cb4344db2fc5547850376e4c8f03dcac125: Status 404 returned error can't find the container with id fde2ae2c23a8aa09b85b6f82830d6cb4344db2fc5547850376e4c8f03dcac125 Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.211654 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:55 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:55 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:55 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.212126 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.322037 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" 
event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerStarted","Data":"c31216b9e0ff870f498c2a70bf1ddbe662212a90e8268b4d5043f36ddcc74554"} Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.326800 4737 generic.go:334] "Generic (PLEG): container finished" podID="73a06f56-82bf-4ba9-b974-aa1465790909" containerID="538fdd49d4ebdb8e5f3a33684e32d050352f7bdce7aee6e53d1ff5ad3cd14291" exitCode=0 Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.326858 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73a06f56-82bf-4ba9-b974-aa1465790909","Type":"ContainerDied","Data":"538fdd49d4ebdb8e5f3a33684e32d050352f7bdce7aee6e53d1ff5ad3cd14291"} Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.329960 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" event={"ID":"7cd9832f-e47d-4503-88fb-6a197b2fe89d","Type":"ContainerStarted","Data":"de95ff1ca34daa6c09ba39c19bf9f0f591ba02698ddf8a9aa80905f7c696901f"} Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.354685 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" event={"ID":"ed97d0e9-4ae3-4db6-9635-38141f37948e","Type":"ContainerStarted","Data":"9a92335e5ee811ee9e069d56e5e2e1f1824bdcc478194dd5dee2932fdc944802"} Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.362001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerStarted","Data":"f8fd9f29d206f3c87bc1a7b0ddafeec3e43e2471474919345b27ec7f8ff03f6f"} Jan 26 18:32:55 crc kubenswrapper[4737]: I0126 18:32:55.365261 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fde2ae2c23a8aa09b85b6f82830d6cb4344db2fc5547850376e4c8f03dcac125"} Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.206190 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.220102 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:56 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:56 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:56 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.220551 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.223696 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7jxs2" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.379168 4737 generic.go:334] "Generic (PLEG): container finished" podID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerID="c31216b9e0ff870f498c2a70bf1ddbe662212a90e8268b4d5043f36ddcc74554" exitCode=0 Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.379254 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerDied","Data":"c31216b9e0ff870f498c2a70bf1ddbe662212a90e8268b4d5043f36ddcc74554"} Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.386973 
4737 generic.go:334] "Generic (PLEG): container finished" podID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerID="f8fd9f29d206f3c87bc1a7b0ddafeec3e43e2471474919345b27ec7f8ff03f6f" exitCode=0 Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.387279 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerDied","Data":"f8fd9f29d206f3c87bc1a7b0ddafeec3e43e2471474919345b27ec7f8ff03f6f"} Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.783422 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.857774 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access\") pod \"73a06f56-82bf-4ba9-b974-aa1465790909\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.858389 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir\") pod \"73a06f56-82bf-4ba9-b974-aa1465790909\" (UID: \"73a06f56-82bf-4ba9-b974-aa1465790909\") " Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.858885 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73a06f56-82bf-4ba9-b974-aa1465790909" (UID: "73a06f56-82bf-4ba9-b974-aa1465790909"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.868209 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73a06f56-82bf-4ba9-b974-aa1465790909" (UID: "73a06f56-82bf-4ba9-b974-aa1465790909"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.961989 4737 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a06f56-82bf-4ba9-b974-aa1465790909-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:56 crc kubenswrapper[4737]: I0126 18:32:56.962042 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a06f56-82bf-4ba9-b974-aa1465790909-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.216560 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:57 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:57 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:57 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.216633 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.367646 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:32:57 crc kubenswrapper[4737]: E0126 18:32:57.368433 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac652a18-5fbd-483e-94d1-0782ee0cc3ac" containerName="collect-profiles" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.368629 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac652a18-5fbd-483e-94d1-0782ee0cc3ac" containerName="collect-profiles" Jan 26 18:32:57 crc kubenswrapper[4737]: E0126 18:32:57.368729 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a06f56-82bf-4ba9-b974-aa1465790909" containerName="pruner" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.368868 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a06f56-82bf-4ba9-b974-aa1465790909" containerName="pruner" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.369060 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a06f56-82bf-4ba9-b974-aa1465790909" containerName="pruner" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.369159 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac652a18-5fbd-483e-94d1-0782ee0cc3ac" containerName="collect-profiles" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.369854 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.372413 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-k965v" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.372620 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.372942 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.389199 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.409659 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5bd105c083d6deb934fabf6d2c16985726d1ecbaf307851eab399005c98edb0e"} Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.413143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"23e828f95ae6bc3116c5d043aa4db4e847ff34a1cae34a3d57cf1f6ca1eec7c1"} Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.413868 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.416138 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73a06f56-82bf-4ba9-b974-aa1465790909","Type":"ContainerDied","Data":"4143c25c9a2a77bb21ece97f392a9b86d035f75237f6526f82b5167285179e38"} Jan 26 
18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.416161 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4143c25c9a2a77bb21ece97f392a9b86d035f75237f6526f82b5167285179e38" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.416241 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.425318 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" event={"ID":"7cd9832f-e47d-4503-88fb-6a197b2fe89d","Type":"ContainerStarted","Data":"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825"} Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.426683 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.439549 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d70c3cee9a4203f4d58944aa38526eed64e0b502e7d486dc772a2fe2d4a0b43f"} Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.489683 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.491772 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.521817 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" podStartSLOduration=133.521796781 podStartE2EDuration="2m13.521796781s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:57.521042321 +0000 UTC m=+150.829237039" watchObservedRunningTime="2026-01-26 18:32:57.521796781 +0000 UTC m=+150.829991489" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.569739 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" podStartSLOduration=18.569711721 podStartE2EDuration="18.569711721s" podCreationTimestamp="2026-01-26 18:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:32:57.568513958 +0000 UTC m=+150.876708676" watchObservedRunningTime="2026-01-26 18:32:57.569711721 +0000 UTC m=+150.877906429" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.594433 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.594483 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.594671 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.633830 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:57 crc kubenswrapper[4737]: I0126 18:32:57.698645 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:32:58 crc kubenswrapper[4737]: I0126 18:32:58.138511 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:32:58 crc kubenswrapper[4737]: I0126 18:32:58.222435 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:58 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:58 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:58 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:58 crc kubenswrapper[4737]: I0126 18:32:58.222518 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:58 crc kubenswrapper[4737]: I0126 18:32:58.517928 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5abb8c8d-4404-4ef1-ad13-b8a9348b604d","Type":"ContainerStarted","Data":"c19ceaafc010e89e6bb082c505d9a8bcbce1345ec2953fa4a2777c918eadf8af"} Jan 26 18:32:59 crc kubenswrapper[4737]: I0126 18:32:59.219615 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:32:59 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:32:59 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:32:59 crc kubenswrapper[4737]: healthz check failed Jan 26 18:32:59 crc kubenswrapper[4737]: I0126 18:32:59.219929 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:32:59 crc kubenswrapper[4737]: I0126 18:32:59.531556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5abb8c8d-4404-4ef1-ad13-b8a9348b604d","Type":"ContainerStarted","Data":"af5edf11b545825a8641cc6358cfe53a1f21af99722c215781e6175e75a1f840"} Jan 26 18:32:59 crc kubenswrapper[4737]: I0126 18:32:59.555837 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.555812888 podStartE2EDuration="2.555812888s" podCreationTimestamp="2026-01-26 18:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
18:32:59.54905295 +0000 UTC m=+152.857247658" watchObservedRunningTime="2026-01-26 18:32:59.555812888 +0000 UTC m=+152.864007596" Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.213321 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:00 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:00 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:00 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.213723 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.806195 4737 patch_prober.go:28] interesting pod/console-f9d7485db-hbdm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.806282 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbdm4" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.949634 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 26 18:33:00 crc kubenswrapper[4737]: I0126 18:33:00.949744 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.211242 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:01 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:01 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:01 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.211331 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.552876 4737 generic.go:334] "Generic (PLEG): container finished" podID="5abb8c8d-4404-4ef1-ad13-b8a9348b604d" containerID="af5edf11b545825a8641cc6358cfe53a1f21af99722c215781e6175e75a1f840" exitCode=0 Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.552936 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5abb8c8d-4404-4ef1-ad13-b8a9348b604d","Type":"ContainerDied","Data":"af5edf11b545825a8641cc6358cfe53a1f21af99722c215781e6175e75a1f840"} Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.596152 4737 patch_prober.go:28] interesting pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Liveness probe 
status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.596189 4737 patch_prober.go:28] interesting pod/downloads-7954f5f757-brpd2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.596288 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:33:01 crc kubenswrapper[4737]: I0126 18:33:01.596226 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-brpd2" podUID="abf4a817-2de4-4f69-9ad8-d15ed857d5ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:33:02 crc kubenswrapper[4737]: I0126 18:33:02.215363 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:02 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:02 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:02 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:02 crc kubenswrapper[4737]: I0126 18:33:02.215439 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Jan 26 18:33:03 crc kubenswrapper[4737]: I0126 18:33:03.213366 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:03 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:03 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:03 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:03 crc kubenswrapper[4737]: I0126 18:33:03.215021 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:04 crc kubenswrapper[4737]: I0126 18:33:04.211182 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:04 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:04 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:04 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:04 crc kubenswrapper[4737]: I0126 18:33:04.211245 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:05 crc kubenswrapper[4737]: I0126 18:33:05.212629 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Jan 26 18:33:05 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:05 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:05 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:05 crc kubenswrapper[4737]: I0126 18:33:05.212742 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:06 crc kubenswrapper[4737]: I0126 18:33:06.211618 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:06 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:06 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:06 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:06 crc kubenswrapper[4737]: I0126 18:33:06.212059 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:07 crc kubenswrapper[4737]: I0126 18:33:07.212629 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:07 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:07 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:07 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:07 crc kubenswrapper[4737]: I0126 18:33:07.212729 4737 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:07 crc kubenswrapper[4737]: I0126 18:33:07.489394 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:33:07 crc kubenswrapper[4737]: I0126 18:33:07.503446 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a3aadb5-b908-4300-af5f-e3c37dff9e14-metrics-certs\") pod \"network-metrics-daemon-4pv7r\" (UID: \"1a3aadb5-b908-4300-af5f-e3c37dff9e14\") " pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:33:07 crc kubenswrapper[4737]: I0126 18:33:07.510733 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4pv7r" Jan 26 18:33:08 crc kubenswrapper[4737]: I0126 18:33:08.214005 4737 patch_prober.go:28] interesting pod/router-default-5444994796-wwzqx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:33:08 crc kubenswrapper[4737]: [-]has-synced failed: reason withheld Jan 26 18:33:08 crc kubenswrapper[4737]: [+]process-running ok Jan 26 18:33:08 crc kubenswrapper[4737]: healthz check failed Jan 26 18:33:08 crc kubenswrapper[4737]: I0126 18:33:08.214151 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwzqx" podUID="60a6a19b-baa5-47c5-8733-202b5bfd0c97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:33:09 crc kubenswrapper[4737]: I0126 18:33:09.212331 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:33:09 crc kubenswrapper[4737]: I0126 18:33:09.216258 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wwzqx" Jan 26 18:33:10 crc kubenswrapper[4737]: I0126 18:33:10.806001 4737 patch_prober.go:28] interesting pod/console-f9d7485db-hbdm4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 18:33:10 crc kubenswrapper[4737]: I0126 18:33:10.806089 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hbdm4" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 18:33:11 crc 
kubenswrapper[4737]: I0126 18:33:11.621294 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-brpd2" Jan 26 18:33:13 crc kubenswrapper[4737]: I0126 18:33:13.282234 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:33:15 crc kubenswrapper[4737]: I0126 18:33:15.828749 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:33:15 crc kubenswrapper[4737]: I0126 18:33:15.914023 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access\") pod \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " Jan 26 18:33:15 crc kubenswrapper[4737]: I0126 18:33:15.914617 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir\") pod \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\" (UID: \"5abb8c8d-4404-4ef1-ad13-b8a9348b604d\") " Jan 26 18:33:15 crc kubenswrapper[4737]: I0126 18:33:15.914837 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5abb8c8d-4404-4ef1-ad13-b8a9348b604d" (UID: "5abb8c8d-4404-4ef1-ad13-b8a9348b604d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:33:15 crc kubenswrapper[4737]: I0126 18:33:15.923420 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5abb8c8d-4404-4ef1-ad13-b8a9348b604d" (UID: "5abb8c8d-4404-4ef1-ad13-b8a9348b604d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:33:16 crc kubenswrapper[4737]: I0126 18:33:16.016756 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:33:16 crc kubenswrapper[4737]: I0126 18:33:16.016807 4737 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abb8c8d-4404-4ef1-ad13-b8a9348b604d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:33:16 crc kubenswrapper[4737]: I0126 18:33:16.689490 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5abb8c8d-4404-4ef1-ad13-b8a9348b604d","Type":"ContainerDied","Data":"c19ceaafc010e89e6bb082c505d9a8bcbce1345ec2953fa4a2777c918eadf8af"} Jan 26 18:33:16 crc kubenswrapper[4737]: I0126 18:33:16.689567 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c19ceaafc010e89e6bb082c505d9a8bcbce1345ec2953fa4a2777c918eadf8af" Jan 26 18:33:16 crc kubenswrapper[4737]: I0126 18:33:16.689672 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:33:20 crc kubenswrapper[4737]: I0126 18:33:20.811473 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:33:20 crc kubenswrapper[4737]: I0126 18:33:20.816841 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:33:22 crc kubenswrapper[4737]: I0126 18:33:22.258685 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6jt9w" Jan 26 18:33:30 crc kubenswrapper[4737]: I0126 18:33:30.948660 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:33:30 crc kubenswrapper[4737]: I0126 18:33:30.949876 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.273966 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.555786 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:33:33 crc kubenswrapper[4737]: E0126 18:33:33.556128 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abb8c8d-4404-4ef1-ad13-b8a9348b604d" containerName="pruner" Jan 26 18:33:33 crc 
kubenswrapper[4737]: I0126 18:33:33.556145 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abb8c8d-4404-4ef1-ad13-b8a9348b604d" containerName="pruner" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.556277 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abb8c8d-4404-4ef1-ad13-b8a9348b604d" containerName="pruner" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.556765 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.562765 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.564318 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.570102 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.580102 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.580169 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.680811 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.680866 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.680971 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.703142 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:33 crc kubenswrapper[4737]: I0126 18:33:33.878128 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:35 crc kubenswrapper[4737]: E0126 18:33:35.151479 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 18:33:35 crc kubenswrapper[4737]: E0126 18:33:35.152316 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnsnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ndrff_openshift-marketplace(74a38d4e-7789-4e8b-abbc-da9d57d1bcc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:35 crc kubenswrapper[4737]: E0126 18:33:35.153570 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ndrff" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.549104 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.550130 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.599803 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.652769 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.652856 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.652901 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.754916 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.754986 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.755018 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.755018 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.755166 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.785422 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access\") pod \"installer-9-crc\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:38 crc kubenswrapper[4737]: I0126 18:33:38.868660 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:33:39 crc kubenswrapper[4737]: E0126 18:33:39.166330 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 18:33:39 crc kubenswrapper[4737]: E0126 18:33:39.167019 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f4ldv_openshift-marketplace(7acd9116-baab-48b1-ab22-7310f60fada8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:39 crc kubenswrapper[4737]: E0126 18:33:39.168204 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f4ldv" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" Jan 26 18:33:39 crc 
kubenswrapper[4737]: E0126 18:33:39.168559 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ndrff" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" Jan 26 18:33:40 crc kubenswrapper[4737]: E0126 18:33:40.532125 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 18:33:40 crc kubenswrapper[4737]: E0126 18:33:40.532413 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ww4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lrlts_openshift-marketplace(7a6f7537-6a89-4f64-a0a1-c96e49c575db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:40 crc kubenswrapper[4737]: E0126 18:33:40.533530 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lrlts" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" Jan 26 18:33:41 crc 
kubenswrapper[4737]: E0126 18:33:41.825741 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 18:33:41 crc kubenswrapper[4737]: E0126 18:33:41.826496 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xhsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-6tf2g_openshift-marketplace(0bd24ab7-1242-4a05-afc2-bd24d931cb3d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:41 crc kubenswrapper[4737]: E0126 18:33:41.828305 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6tf2g" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.267956 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f4ldv" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.267971 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6tf2g" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.268777 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lrlts" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.379471 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.379684 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7br78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nmjc5_openshift-marketplace(b29f7821-ed11-4b5d-b946-4562c4c595ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 26 18:33:43 crc kubenswrapper[4737]: E0126 18:33:43.382596 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nmjc5" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.699457 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.699663 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59fs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2ql5w_openshift-marketplace(402663db-5331-4692-8539-f79973a5759b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.701924 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2ql5w" podUID="402663db-5331-4692-8539-f79973a5759b" Jan 26 18:33:44 crc 
kubenswrapper[4737]: E0126 18:33:44.706746 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.706888 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-276bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-4cxsx_openshift-marketplace(01d16131-935e-4d13-8f42-d9ff3ce55769): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.708169 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4cxsx" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.829125 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.829579 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv8qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5j2cd_openshift-marketplace(0a348468-634f-4d18-aa1d-ecc9aff08138): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:33:44 crc kubenswrapper[4737]: E0126 18:33:44.830973 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5j2cd" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" Jan 26 18:33:44 crc 
kubenswrapper[4737]: E0126 18:33:44.904776 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5j2cd" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.125958 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.137417 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4pv7r"] Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.140694 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.909157 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3628597d-09b4-4169-ba4b-ddedf59fce32","Type":"ContainerStarted","Data":"21e2c44b3593d982d4f25a5a465e6677bdb8f2550e805f738b92a6f6df97bd52"} Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.911560 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" event={"ID":"1a3aadb5-b908-4300-af5f-e3c37dff9e14","Type":"ContainerStarted","Data":"d4e8975c9abc7589e9b8d06385b9823d29aaa0793a7a79597b9cdecb7a241b88"} Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.911624 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" event={"ID":"1a3aadb5-b908-4300-af5f-e3c37dff9e14","Type":"ContainerStarted","Data":"d743afcc278cadea9456a1f381b7dbd8bc9981c10c1e2c947fdf77d34d6434d4"} Jan 26 18:33:45 crc kubenswrapper[4737]: I0126 18:33:45.912929 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e262210-e029-484c-a86e-3e2c50becd95","Type":"ContainerStarted","Data":"e381e57559eba35bf49538dd62d75d7326bbdcea67fb734af1362d1f8714d9a5"} Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.921653 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4pv7r" event={"ID":"1a3aadb5-b908-4300-af5f-e3c37dff9e14","Type":"ContainerStarted","Data":"2ad2493573dba6c7aa36adb824ed37e101aff195fb9821b3e63ddec66153712e"} Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.925122 4737 generic.go:334] "Generic (PLEG): container finished" podID="8e262210-e029-484c-a86e-3e2c50becd95" containerID="62d1d24ca8e1a8a9d5e07f75488ba932f7a27c1989f1bd729cc3e0a91344855e" exitCode=0 Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.925215 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e262210-e029-484c-a86e-3e2c50becd95","Type":"ContainerDied","Data":"62d1d24ca8e1a8a9d5e07f75488ba932f7a27c1989f1bd729cc3e0a91344855e"} Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.927731 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3628597d-09b4-4169-ba4b-ddedf59fce32","Type":"ContainerStarted","Data":"de7bc978ffb7f2ad06dfd08eb169f38ca80433cc84f513169b174729d4de5a3c"} Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.937007 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4pv7r" podStartSLOduration=182.936986671 podStartE2EDuration="3m2.936986671s" podCreationTimestamp="2026-01-26 18:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:33:46.936214301 +0000 UTC m=+200.244409009" watchObservedRunningTime="2026-01-26 18:33:46.936986671 +0000 UTC m=+200.245181379" 
Jan 26 18:33:46 crc kubenswrapper[4737]: I0126 18:33:46.980126 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=8.980105175 podStartE2EDuration="8.980105175s" podCreationTimestamp="2026-01-26 18:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:33:46.964115727 +0000 UTC m=+200.272310435" watchObservedRunningTime="2026-01-26 18:33:46.980105175 +0000 UTC m=+200.288299883" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.195278 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.310194 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access\") pod \"8e262210-e029-484c-a86e-3e2c50becd95\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.310280 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir\") pod \"8e262210-e029-484c-a86e-3e2c50becd95\" (UID: \"8e262210-e029-484c-a86e-3e2c50becd95\") " Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.310432 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e262210-e029-484c-a86e-3e2c50becd95" (UID: "8e262210-e029-484c-a86e-3e2c50becd95"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.310764 4737 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e262210-e029-484c-a86e-3e2c50becd95-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.318911 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e262210-e029-484c-a86e-3e2c50becd95" (UID: "8e262210-e029-484c-a86e-3e2c50becd95"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.411560 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e262210-e029-484c-a86e-3e2c50becd95-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.943279 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8e262210-e029-484c-a86e-3e2c50becd95","Type":"ContainerDied","Data":"e381e57559eba35bf49538dd62d75d7326bbdcea67fb734af1362d1f8714d9a5"} Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.943564 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e381e57559eba35bf49538dd62d75d7326bbdcea67fb734af1362d1f8714d9a5" Jan 26 18:33:48 crc kubenswrapper[4737]: I0126 18:33:48.943342 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:33:58 crc kubenswrapper[4737]: I0126 18:33:58.722174 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9kjp9"] Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.027824 4737 generic.go:334] "Generic (PLEG): container finished" podID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerID="1b3d10bc428a2ea865a0a4ce256ae811530ed545c39aad4cbb8a6a1b74c1c6b2" exitCode=0 Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.027911 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerDied","Data":"1b3d10bc428a2ea865a0a4ce256ae811530ed545c39aad4cbb8a6a1b74c1c6b2"} Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.952415 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.952897 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.952960 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.953529 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:34:00 crc kubenswrapper[4737]: I0126 18:34:00.953661 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6" gracePeriod=600 Jan 26 18:34:02 crc kubenswrapper[4737]: I0126 18:34:02.060189 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6" exitCode=0 Jan 26 18:34:02 crc kubenswrapper[4737]: I0126 18:34:02.060233 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.067575 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerStarted","Data":"f40783bb9b568b9258c666cd5f416e61a600bd9ddddbe5688a8c33e0758c3fa0"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.071670 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.074616 4737 generic.go:334] 
"Generic (PLEG): container finished" podID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerID="95225ca6d9f22406abd55ce795aab2ba9ba467bd8b67a6bb0c94a9e039dfd744" exitCode=0 Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.074677 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerDied","Data":"95225ca6d9f22406abd55ce795aab2ba9ba467bd8b67a6bb0c94a9e039dfd744"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.078341 4737 generic.go:334] "Generic (PLEG): container finished" podID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerID="6a1d023a4649553aff40b0a9bd57ad0bc6d226fea827783590be3ba0504c15b6" exitCode=0 Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.078477 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerDied","Data":"6a1d023a4649553aff40b0a9bd57ad0bc6d226fea827783590be3ba0504c15b6"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.083704 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerStarted","Data":"d661aa57bebf951b78e012e16a95f09a77574bc4ef40083a2ce6d9d3aeea9000"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.092786 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerStarted","Data":"f4c58c1a5a76fa4c57db377d5ab92367950e651c7f1d84f1b6286d1583822707"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.112379 4737 generic.go:334] "Generic (PLEG): container finished" podID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerID="c7d03ae45b8110a35d88dcabe2b15422331a9a427c878838d5047bc555143b09" exitCode=0 Jan 26 18:34:03 crc 
kubenswrapper[4737]: I0126 18:34:03.112445 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerDied","Data":"c7d03ae45b8110a35d88dcabe2b15422331a9a427c878838d5047bc555143b09"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.116533 4737 generic.go:334] "Generic (PLEG): container finished" podID="402663db-5331-4692-8539-f79973a5759b" containerID="7346e019814d7b368403cb66cc8100101e707c6aca653930974f646c8c9cd5c8" exitCode=0 Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.117040 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerDied","Data":"7346e019814d7b368403cb66cc8100101e707c6aca653930974f646c8c9cd5c8"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.125436 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerStarted","Data":"a2a5d7c5df2f473c1bbc0c5b76e4c1e90e552e6602da2da3c4ab24e487277da3"} Jan 26 18:34:03 crc kubenswrapper[4737]: I0126 18:34:03.153187 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ndrff" podStartSLOduration=4.945948907 podStartE2EDuration="1m15.153166702s" podCreationTimestamp="2026-01-26 18:32:48 +0000 UTC" firstStartedPulling="2026-01-26 18:32:51.957298229 +0000 UTC m=+145.265492937" lastFinishedPulling="2026-01-26 18:34:02.164516024 +0000 UTC m=+215.472710732" observedRunningTime="2026-01-26 18:34:03.130468685 +0000 UTC m=+216.438663383" watchObservedRunningTime="2026-01-26 18:34:03.153166702 +0000 UTC m=+216.461361410" Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.133546 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerStarted","Data":"52ea918acb02ea5114f82eff17b4c7301ac4eb2ad1d798ec9e7528ca1f3c8dad"} Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.138354 4737 generic.go:334] "Generic (PLEG): container finished" podID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerID="a2a5d7c5df2f473c1bbc0c5b76e4c1e90e552e6602da2da3c4ab24e487277da3" exitCode=0 Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.138416 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerDied","Data":"a2a5d7c5df2f473c1bbc0c5b76e4c1e90e552e6602da2da3c4ab24e487277da3"} Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.142294 4737 generic.go:334] "Generic (PLEG): container finished" podID="7acd9116-baab-48b1-ab22-7310f60fada8" containerID="f4c58c1a5a76fa4c57db377d5ab92367950e651c7f1d84f1b6286d1583822707" exitCode=0 Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.142345 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerDied","Data":"f4c58c1a5a76fa4c57db377d5ab92367950e651c7f1d84f1b6286d1583822707"} Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.152861 4737 generic.go:334] "Generic (PLEG): container finished" podID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerID="f40783bb9b568b9258c666cd5f416e61a600bd9ddddbe5688a8c33e0758c3fa0" exitCode=0 Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.152931 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerDied","Data":"f40783bb9b568b9258c666cd5f416e61a600bd9ddddbe5688a8c33e0758c3fa0"} Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.167139 4737 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5j2cd" podStartSLOduration=3.484766822 podStartE2EDuration="1m14.167118558s" podCreationTimestamp="2026-01-26 18:32:50 +0000 UTC" firstStartedPulling="2026-01-26 18:32:53.14489808 +0000 UTC m=+146.453092788" lastFinishedPulling="2026-01-26 18:34:03.827249816 +0000 UTC m=+217.135444524" observedRunningTime="2026-01-26 18:34:04.159816693 +0000 UTC m=+217.468011401" watchObservedRunningTime="2026-01-26 18:34:04.167118558 +0000 UTC m=+217.475313276" Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.173198 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerStarted","Data":"2edbe879efcb559f15a3d3f855130d51d9cf622672b2294bec0f7d4e78c26fbd"} Jan 26 18:34:04 crc kubenswrapper[4737]: I0126 18:34:04.258564 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tf2g" podStartSLOduration=3.020297066 podStartE2EDuration="1m16.258537604s" podCreationTimestamp="2026-01-26 18:32:48 +0000 UTC" firstStartedPulling="2026-01-26 18:32:50.670154559 +0000 UTC m=+143.978349267" lastFinishedPulling="2026-01-26 18:34:03.908395087 +0000 UTC m=+217.216589805" observedRunningTime="2026-01-26 18:34:04.254815684 +0000 UTC m=+217.563010392" watchObservedRunningTime="2026-01-26 18:34:04.258537604 +0000 UTC m=+217.566732312" Jan 26 18:34:05 crc kubenswrapper[4737]: I0126 18:34:05.180963 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerStarted","Data":"8a817a7c79a7cc5c7c2664c1eca00880fbaf0791758e16874c13214a3be17120"} Jan 26 18:34:05 crc kubenswrapper[4737]: I0126 18:34:05.185102 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerStarted","Data":"bf28613d96c82a9e36032920a97738260d866e2acc87f44ce2eb8d80250d514d"} Jan 26 18:34:05 crc kubenswrapper[4737]: I0126 18:34:05.207643 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ql5w" podStartSLOduration=6.553703223 podStartE2EDuration="1m15.207622634s" podCreationTimestamp="2026-01-26 18:32:50 +0000 UTC" firstStartedPulling="2026-01-26 18:32:55.369267417 +0000 UTC m=+148.677462125" lastFinishedPulling="2026-01-26 18:34:04.023186828 +0000 UTC m=+217.331381536" observedRunningTime="2026-01-26 18:34:05.203518045 +0000 UTC m=+218.511712773" watchObservedRunningTime="2026-01-26 18:34:05.207622634 +0000 UTC m=+218.515817342" Jan 26 18:34:05 crc kubenswrapper[4737]: I0126 18:34:05.232379 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lrlts" podStartSLOduration=5.119459695 podStartE2EDuration="1m17.232360266s" podCreationTimestamp="2026-01-26 18:32:48 +0000 UTC" firstStartedPulling="2026-01-26 18:32:51.950842121 +0000 UTC m=+145.259036829" lastFinishedPulling="2026-01-26 18:34:04.063742692 +0000 UTC m=+217.371937400" observedRunningTime="2026-01-26 18:34:05.22951687 +0000 UTC m=+218.537711578" watchObservedRunningTime="2026-01-26 18:34:05.232360266 +0000 UTC m=+218.540554974" Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.192637 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerStarted","Data":"814d1b960114e4158a347e60bd2a0b55832520a8df14191ce7afa97e33da0cc0"} Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.194861 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" 
event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerStarted","Data":"d8c0f96fa74c12eb06608cd0966cc6c6bb7c15c5110082f83d27fc0ea772d03f"} Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.198057 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerStarted","Data":"e9e878007d84395c685f8f7f259da0d9d0e816d01f706c53f2c68cb824134393"} Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.215413 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4ldv" podStartSLOduration=3.536410744 podStartE2EDuration="1m18.215389864s" podCreationTimestamp="2026-01-26 18:32:48 +0000 UTC" firstStartedPulling="2026-01-26 18:32:50.640342362 +0000 UTC m=+143.948537070" lastFinishedPulling="2026-01-26 18:34:05.319321482 +0000 UTC m=+218.627516190" observedRunningTime="2026-01-26 18:34:06.213027081 +0000 UTC m=+219.521221799" watchObservedRunningTime="2026-01-26 18:34:06.215389864 +0000 UTC m=+219.523584572" Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.257025 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4cxsx" podStartSLOduration=6.491849797 podStartE2EDuration="1m14.257005638s" podCreationTimestamp="2026-01-26 18:32:52 +0000 UTC" firstStartedPulling="2026-01-26 18:32:57.444409805 +0000 UTC m=+150.752604513" lastFinishedPulling="2026-01-26 18:34:05.209565646 +0000 UTC m=+218.517760354" observedRunningTime="2026-01-26 18:34:06.253834543 +0000 UTC m=+219.562029251" watchObservedRunningTime="2026-01-26 18:34:06.257005638 +0000 UTC m=+219.565200346" Jan 26 18:34:06 crc kubenswrapper[4737]: I0126 18:34:06.288808 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmjc5" podStartSLOduration=7.586268113 podStartE2EDuration="1m15.288788818s" 
podCreationTimestamp="2026-01-26 18:32:51 +0000 UTC" firstStartedPulling="2026-01-26 18:32:57.444134138 +0000 UTC m=+150.752328846" lastFinishedPulling="2026-01-26 18:34:05.146654843 +0000 UTC m=+218.454849551" observedRunningTime="2026-01-26 18:34:06.287631847 +0000 UTC m=+219.595826565" watchObservedRunningTime="2026-01-26 18:34:06.288788818 +0000 UTC m=+219.596983526" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.728602 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.730004 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.780399 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.781549 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.797376 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:34:08 crc kubenswrapper[4737]: I0126 18:34:08.823980 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.258593 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.596720 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.597106 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.635525 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.879922 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.879988 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:09 crc kubenswrapper[4737]: I0126 18:34:09.930364 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:10 crc kubenswrapper[4737]: I0126 18:34:10.257156 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:10 crc kubenswrapper[4737]: I0126 18:34:10.262030 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:10 crc kubenswrapper[4737]: I0126 18:34:10.742256 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:34:10 crc kubenswrapper[4737]: I0126 18:34:10.742312 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:34:10 crc kubenswrapper[4737]: I0126 18:34:10.783823 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.098382 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 
18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.098450 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.146944 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.232798 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.281516 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:34:11 crc kubenswrapper[4737]: I0126 18:34:11.287990 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.229812 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lrlts" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="registry-server" containerID="cri-o://bf28613d96c82a9e36032920a97738260d866e2acc87f44ce2eb8d80250d514d" gracePeriod=2 Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.370422 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.370990 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.410609 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.553450 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.553530 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:12 crc kubenswrapper[4737]: I0126 18:34:12.605010 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.279660 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.308880 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.435502 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.435987 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ndrff" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="registry-server" containerID="cri-o://d661aa57bebf951b78e012e16a95f09a77574bc4ef40083a2ce6d9d3aeea9000" gracePeriod=2 Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.630371 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:34:13 crc kubenswrapper[4737]: I0126 18:34:13.630698 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ql5w" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="registry-server" containerID="cri-o://8a817a7c79a7cc5c7c2664c1eca00880fbaf0791758e16874c13214a3be17120" gracePeriod=2 Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.027371 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.027695 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4cxsx" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="registry-server" containerID="cri-o://e9e878007d84395c685f8f7f259da0d9d0e816d01f706c53f2c68cb824134393" gracePeriod=2 Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.256880 4737 generic.go:334] "Generic (PLEG): container finished" podID="402663db-5331-4692-8539-f79973a5759b" containerID="8a817a7c79a7cc5c7c2664c1eca00880fbaf0791758e16874c13214a3be17120" exitCode=0 Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.256914 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerDied","Data":"8a817a7c79a7cc5c7c2664c1eca00880fbaf0791758e16874c13214a3be17120"} Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.272221 4737 generic.go:334] "Generic (PLEG): container finished" podID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerID="bf28613d96c82a9e36032920a97738260d866e2acc87f44ce2eb8d80250d514d" exitCode=0 Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.272291 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerDied","Data":"bf28613d96c82a9e36032920a97738260d866e2acc87f44ce2eb8d80250d514d"} Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.274573 4737 generic.go:334] "Generic (PLEG): container finished" podID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerID="d661aa57bebf951b78e012e16a95f09a77574bc4ef40083a2ce6d9d3aeea9000" exitCode=0 Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.274613 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerDied","Data":"d661aa57bebf951b78e012e16a95f09a77574bc4ef40083a2ce6d9d3aeea9000"} Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.511597 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.593216 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.650630 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities\") pod \"402663db-5331-4692-8539-f79973a5759b\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.650708 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content\") pod \"402663db-5331-4692-8539-f79973a5759b\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.650796 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59fs5\" (UniqueName: \"kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5\") pod \"402663db-5331-4692-8539-f79973a5759b\" (UID: \"402663db-5331-4692-8539-f79973a5759b\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.651781 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities" (OuterVolumeSpecName: "utilities") pod "402663db-5331-4692-8539-f79973a5759b" (UID: 
"402663db-5331-4692-8539-f79973a5759b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.655948 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5" (OuterVolumeSpecName: "kube-api-access-59fs5") pod "402663db-5331-4692-8539-f79973a5759b" (UID: "402663db-5331-4692-8539-f79973a5759b"). InnerVolumeSpecName "kube-api-access-59fs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.667054 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752430 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content\") pod \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752578 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnsnx\" (UniqueName: \"kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx\") pod \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752616 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities\") pod \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752693 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-5ww4w\" (UniqueName: \"kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w\") pod \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752722 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content\") pod \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\" (UID: \"7a6f7537-6a89-4f64-a0a1-c96e49c575db\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752739 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities\") pod \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\" (UID: \"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4\") " Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752976 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59fs5\" (UniqueName: \"kubernetes.io/projected/402663db-5331-4692-8539-f79973a5759b-kube-api-access-59fs5\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.752994 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.753798 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities" (OuterVolumeSpecName: "utilities") pod "7a6f7537-6a89-4f64-a0a1-c96e49c575db" (UID: "7a6f7537-6a89-4f64-a0a1-c96e49c575db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.753892 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities" (OuterVolumeSpecName: "utilities") pod "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" (UID: "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.755923 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w" (OuterVolumeSpecName: "kube-api-access-5ww4w") pod "7a6f7537-6a89-4f64-a0a1-c96e49c575db" (UID: "7a6f7537-6a89-4f64-a0a1-c96e49c575db"). InnerVolumeSpecName "kube-api-access-5ww4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.755978 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx" (OuterVolumeSpecName: "kube-api-access-jnsnx") pod "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" (UID: "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4"). InnerVolumeSpecName "kube-api-access-jnsnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.852493 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a6f7537-6a89-4f64-a0a1-c96e49c575db" (UID: "7a6f7537-6a89-4f64-a0a1-c96e49c575db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.854733 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnsnx\" (UniqueName: \"kubernetes.io/projected/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-kube-api-access-jnsnx\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.854787 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.854801 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ww4w\" (UniqueName: \"kubernetes.io/projected/7a6f7537-6a89-4f64-a0a1-c96e49c575db-kube-api-access-5ww4w\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.854812 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a6f7537-6a89-4f64-a0a1-c96e49c575db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.854822 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.861912 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" (UID: "74a38d4e-7789-4e8b-abbc-da9d57d1bcc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.956526 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:16 crc kubenswrapper[4737]: I0126 18:34:16.981793 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402663db-5331-4692-8539-f79973a5759b" (UID: "402663db-5331-4692-8539-f79973a5759b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.058916 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402663db-5331-4692-8539-f79973a5759b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.281657 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ql5w" event={"ID":"402663db-5331-4692-8539-f79973a5759b","Type":"ContainerDied","Data":"c308165fc40bb2460cfac4661d0881107d8140919dbee4ebab81c419b168a237"} Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.281714 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ql5w" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.281732 4737 scope.go:117] "RemoveContainer" containerID="8a817a7c79a7cc5c7c2664c1eca00880fbaf0791758e16874c13214a3be17120" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.284038 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrlts" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.284021 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrlts" event={"ID":"7a6f7537-6a89-4f64-a0a1-c96e49c575db","Type":"ContainerDied","Data":"67632fe5344747d496459b7026b3bb49d41e3c1f87f7c82a5f544dc67917c9b4"} Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.286779 4737 generic.go:334] "Generic (PLEG): container finished" podID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerID="e9e878007d84395c685f8f7f259da0d9d0e816d01f706c53f2c68cb824134393" exitCode=0 Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.286823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerDied","Data":"e9e878007d84395c685f8f7f259da0d9d0e816d01f706c53f2c68cb824134393"} Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.289606 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndrff" event={"ID":"74a38d4e-7789-4e8b-abbc-da9d57d1bcc4","Type":"ContainerDied","Data":"eb1d0d7853d05618ed0e72a3a81c3224d6ea1c3ab1d37e0acd9019d429294510"} Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.289658 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndrff" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.298427 4737 scope.go:117] "RemoveContainer" containerID="7346e019814d7b368403cb66cc8100101e707c6aca653930974f646c8c9cd5c8" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.308213 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.311017 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ql5w"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.318846 4737 scope.go:117] "RemoveContainer" containerID="1c1d5d815c3e54786ed93282aa94ebfdc7104080d2cdaf25b34daccb90804cd0" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.320663 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.323583 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ndrff"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.331391 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.334470 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lrlts"] Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.339477 4737 scope.go:117] "RemoveContainer" containerID="bf28613d96c82a9e36032920a97738260d866e2acc87f44ce2eb8d80250d514d" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.376576 4737 scope.go:117] "RemoveContainer" containerID="6a1d023a4649553aff40b0a9bd57ad0bc6d226fea827783590be3ba0504c15b6" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.393725 4737 scope.go:117] "RemoveContainer" 
containerID="5f71ee33ff31bae6f5c88211cf4796a1dd7929c1c3648d8ede100b04f11f95d5" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.411987 4737 scope.go:117] "RemoveContainer" containerID="d661aa57bebf951b78e012e16a95f09a77574bc4ef40083a2ce6d9d3aeea9000" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.428403 4737 scope.go:117] "RemoveContainer" containerID="1b3d10bc428a2ea865a0a4ce256ae811530ed545c39aad4cbb8a6a1b74c1c6b2" Jan 26 18:34:17 crc kubenswrapper[4737]: I0126 18:34:17.441946 4737 scope.go:117] "RemoveContainer" containerID="8b1141abf9c354ae94c01135e707e5fe8fbb13bd3d080402585b461c69bca673" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.270050 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.303201 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cxsx" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.303209 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cxsx" event={"ID":"01d16131-935e-4d13-8f42-d9ff3ce55769","Type":"ContainerDied","Data":"42cab37fc48e04f9fd7ee09c0f56a72dcdc76c93a626c2235a7641482f6ab4f0"} Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.303285 4737 scope.go:117] "RemoveContainer" containerID="e9e878007d84395c685f8f7f259da0d9d0e816d01f706c53f2c68cb824134393" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.321987 4737 scope.go:117] "RemoveContainer" containerID="a2a5d7c5df2f473c1bbc0c5b76e4c1e90e552e6602da2da3c4ab24e487277da3" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.339433 4737 scope.go:117] "RemoveContainer" containerID="c31216b9e0ff870f498c2a70bf1ddbe662212a90e8268b4d5043f36ddcc74554" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.416458 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities\") pod \"01d16131-935e-4d13-8f42-d9ff3ce55769\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.416493 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content\") pod \"01d16131-935e-4d13-8f42-d9ff3ce55769\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.416535 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276bj\" (UniqueName: \"kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj\") pod \"01d16131-935e-4d13-8f42-d9ff3ce55769\" (UID: \"01d16131-935e-4d13-8f42-d9ff3ce55769\") " Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.418633 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities" (OuterVolumeSpecName: "utilities") pod "01d16131-935e-4d13-8f42-d9ff3ce55769" (UID: "01d16131-935e-4d13-8f42-d9ff3ce55769"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.422407 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj" (OuterVolumeSpecName: "kube-api-access-276bj") pod "01d16131-935e-4d13-8f42-d9ff3ce55769" (UID: "01d16131-935e-4d13-8f42-d9ff3ce55769"). InnerVolumeSpecName "kube-api-access-276bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.517986 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.518021 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276bj\" (UniqueName: \"kubernetes.io/projected/01d16131-935e-4d13-8f42-d9ff3ce55769-kube-api-access-276bj\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.765536 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.991312 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402663db-5331-4692-8539-f79973a5759b" path="/var/lib/kubelet/pods/402663db-5331-4692-8539-f79973a5759b/volumes" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.992107 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" path="/var/lib/kubelet/pods/74a38d4e-7789-4e8b-abbc-da9d57d1bcc4/volumes" Jan 26 18:34:18 crc kubenswrapper[4737]: I0126 18:34:18.992896 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" path="/var/lib/kubelet/pods/7a6f7537-6a89-4f64-a0a1-c96e49c575db/volumes" Jan 26 18:34:19 crc kubenswrapper[4737]: I0126 18:34:19.300736 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01d16131-935e-4d13-8f42-d9ff3ce55769" (UID: "01d16131-935e-4d13-8f42-d9ff3ce55769"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:34:19 crc kubenswrapper[4737]: I0126 18:34:19.328737 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d16131-935e-4d13-8f42-d9ff3ce55769-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:19 crc kubenswrapper[4737]: I0126 18:34:19.530640 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:34:19 crc kubenswrapper[4737]: I0126 18:34:19.533184 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4cxsx"] Jan 26 18:34:20 crc kubenswrapper[4737]: I0126 18:34:20.989179 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" path="/var/lib/kubelet/pods/01d16131-935e-4d13-8f42-d9ff3ce55769/volumes" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.680821 4737 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681810 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681827 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681836 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681841 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681852 4737 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681859 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681869 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681875 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681881 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681887 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681894 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e262210-e029-484c-a86e-3e2c50becd95" containerName="pruner" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681901 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e262210-e029-484c-a86e-3e2c50becd95" containerName="pruner" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681912 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681919 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681926 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681932 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681942 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681948 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681957 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681963 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681972 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681977 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.681987 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.681992 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="extract-content" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682002 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682008 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="extract-utilities" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682118 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d16131-935e-4d13-8f42-d9ff3ce55769" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682133 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="402663db-5331-4692-8539-f79973a5759b" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682141 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e262210-e029-484c-a86e-3e2c50becd95" containerName="pruner" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682151 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a38d4e-7789-4e8b-abbc-da9d57d1bcc4" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682159 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6f7537-6a89-4f64-a0a1-c96e49c575db" containerName="registry-server" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682479 4737 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682501 4737 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682611 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682618 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682627 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682632 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682640 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682646 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682655 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682660 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682668 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682675 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682682 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682687 4737 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682772 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682781 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682789 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682797 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682804 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682812 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:34:23 crc kubenswrapper[4737]: E0126 18:34:23.682895 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.682902 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683019 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683216 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675" gracePeriod=15 Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683369 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5" gracePeriod=15 Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683406 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420" gracePeriod=15 Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683436 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1" gracePeriod=15 Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.683467 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd" gracePeriod=15 Jan 26 18:34:23 crc 
kubenswrapper[4737]: I0126 18:34:23.687604 4737 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.693902 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.693992 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694016 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694044 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694098 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694119 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694150 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.694174 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.755742 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerName="oauth-openshift" containerID="cri-o://a7dd5c5a40c38e57b127df6dfb7900c2f3b7b3dc73cb475cba8fabacacbb037e" gracePeriod=15 Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.795998 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796084 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796098 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796117 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796149 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796200 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796243 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796264 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796255 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796298 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796323 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796275 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796415 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796508 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796518 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:23 crc kubenswrapper[4737]: I0126 18:34:23.796569 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.336037 4737 generic.go:334] "Generic (PLEG): container finished" podID="3628597d-09b4-4169-ba4b-ddedf59fce32" containerID="de7bc978ffb7f2ad06dfd08eb169f38ca80433cc84f513169b174729d4de5a3c" exitCode=0 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.336169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3628597d-09b4-4169-ba4b-ddedf59fce32","Type":"ContainerDied","Data":"de7bc978ffb7f2ad06dfd08eb169f38ca80433cc84f513169b174729d4de5a3c"} Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.337359 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.339676 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.340867 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.341609 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5" exitCode=0 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.341648 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420" exitCode=0 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.341658 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1" exitCode=0 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.341667 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd" exitCode=2 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.341704 4737 scope.go:117] "RemoveContainer" containerID="d2968ec8a8ae174c006de379e7fae84b111c90cb44e51bb8d0fdcbc0e66a5842" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.351778 4737 generic.go:334] "Generic (PLEG): container finished" podID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerID="a7dd5c5a40c38e57b127df6dfb7900c2f3b7b3dc73cb475cba8fabacacbb037e" exitCode=0 Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.351822 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" event={"ID":"fdc44942-56de-4694-bcd4-bca48f1e1e08","Type":"ContainerDied","Data":"a7dd5c5a40c38e57b127df6dfb7900c2f3b7b3dc73cb475cba8fabacacbb037e"} Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.602956 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.603593 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.603945 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.604978 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605016 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605055 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: 
\"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605098 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605161 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605194 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605225 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv66w\" (UniqueName: \"kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605252 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 
crc kubenswrapper[4737]: I0126 18:34:24.605278 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605304 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605332 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605388 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.605410 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session\") pod \"fdc44942-56de-4694-bcd4-bca48f1e1e08\" (UID: \"fdc44942-56de-4694-bcd4-bca48f1e1e08\") " Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.606001 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.606038 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.606434 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.606815 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.607641 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.611504 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.611749 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.611804 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w" (OuterVolumeSpecName: "kube-api-access-gv66w") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "kube-api-access-gv66w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612061 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612172 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612311 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612385 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612464 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.612625 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fdc44942-56de-4694-bcd4-bca48f1e1e08" (UID: "fdc44942-56de-4694-bcd4-bca48f1e1e08"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706876 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv66w\" (UniqueName: \"kubernetes.io/projected/fdc44942-56de-4694-bcd4-bca48f1e1e08-kube-api-access-gv66w\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706918 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706949 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706961 4737 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706972 4737 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fdc44942-56de-4694-bcd4-bca48f1e1e08-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706981 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.706992 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707003 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707012 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707020 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707028 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707037 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707046 4737 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:24 crc kubenswrapper[4737]: I0126 18:34:24.707054 4737 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdc44942-56de-4694-bcd4-bca48f1e1e08-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.358895 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" event={"ID":"fdc44942-56de-4694-bcd4-bca48f1e1e08","Type":"ContainerDied","Data":"7590ec628d4c165b51e9a8a05ac09c509e26161d57da8ee9ed3598ec56b7dd4b"} Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.358962 4737 scope.go:117] "RemoveContainer" containerID="a7dd5c5a40c38e57b127df6dfb7900c2f3b7b3dc73cb475cba8fabacacbb037e" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.359118 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.360761 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.361445 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.366602 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" 
Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.369772 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.370208 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.601859 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.602714 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.603380 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.718619 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock\") pod \"3628597d-09b4-4169-ba4b-ddedf59fce32\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.718776 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access\") pod \"3628597d-09b4-4169-ba4b-ddedf59fce32\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.718780 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock" (OuterVolumeSpecName: "var-lock") pod "3628597d-09b4-4169-ba4b-ddedf59fce32" (UID: "3628597d-09b4-4169-ba4b-ddedf59fce32"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.718820 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir\") pod \"3628597d-09b4-4169-ba4b-ddedf59fce32\" (UID: \"3628597d-09b4-4169-ba4b-ddedf59fce32\") " Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.718922 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3628597d-09b4-4169-ba4b-ddedf59fce32" (UID: "3628597d-09b4-4169-ba4b-ddedf59fce32"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.719047 4737 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.719059 4737 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3628597d-09b4-4169-ba4b-ddedf59fce32-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.722990 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3628597d-09b4-4169-ba4b-ddedf59fce32" (UID: "3628597d-09b4-4169-ba4b-ddedf59fce32"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:34:25 crc kubenswrapper[4737]: I0126 18:34:25.820707 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3628597d-09b4-4169-ba4b-ddedf59fce32-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.111533 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.112531 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.113184 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.113505 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.113940 4737 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.225999 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226099 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226146 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226163 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226272 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226299 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226627 4737 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226651 4737 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.226664 4737 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.375505 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.375556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3628597d-09b4-4169-ba4b-ddedf59fce32","Type":"ContainerDied","Data":"21e2c44b3593d982d4f25a5a465e6677bdb8f2550e805f738b92a6f6df97bd52"} Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.375965 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e2c44b3593d982d4f25a5a465e6677bdb8f2550e805f738b92a6f6df97bd52" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.378732 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.379505 4737 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675" exitCode=0 Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.379562 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.379565 4737 scope.go:117] "RemoveContainer" containerID="209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.391501 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.391849 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.392142 4737 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.393667 4737 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.393974 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.394341 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.397330 4737 scope.go:117] "RemoveContainer" containerID="45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.416399 4737 scope.go:117] "RemoveContainer" containerID="bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.427340 4737 scope.go:117] "RemoveContainer" containerID="e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.440104 4737 scope.go:117] "RemoveContainer" containerID="03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.452004 4737 scope.go:117] "RemoveContainer" containerID="f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.468216 4737 scope.go:117] "RemoveContainer" containerID="209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.468702 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\": container with ID starting with 209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5 not found: ID does not exist" containerID="209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.468739 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5"} err="failed to get container status \"209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\": rpc error: code = NotFound desc = could not find container \"209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5\": container with ID starting with 209ecbbc6838b629efde256a421bfd4b6926d2a9cd2f02e4fb7df9325fdecfc5 not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.468769 4737 scope.go:117] "RemoveContainer" containerID="45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.469168 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\": container with ID starting with 45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420 not found: ID does not exist" containerID="45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469213 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420"} err="failed to get container status \"45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\": rpc error: code = NotFound desc = could not find container \"45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420\": container with ID starting with 45b34a9d70cf8504fd809f816a326a74e9a3c422a1ed1ffc221e72f90629b420 not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469252 4737 scope.go:117] "RemoveContainer" containerID="bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.469638 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\": container with ID starting with bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1 not found: ID does not exist" containerID="bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469664 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1"} err="failed to get container status \"bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\": rpc error: code = NotFound desc = could not find container \"bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1\": container with ID starting with bcce3c0b3eaf0ab467b2dbcadc4770536de6e0abf901c9636df113498aff77a1 not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469685 4737 scope.go:117] "RemoveContainer" containerID="e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.469919 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\": container with ID starting with e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd not found: ID does not exist" containerID="e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469941 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd"} err="failed to get container status \"e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\": rpc error: code = NotFound desc = could not find container \"e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd\": container with ID starting with e96d13541d78d88ffb1e1dcff16556814da8c438d160fef0ea16468954f300dd not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.469959 4737 scope.go:117] "RemoveContainer" containerID="03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.470207 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\": container with ID starting with 03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675 not found: ID does not exist" containerID="03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.470230 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675"} err="failed to get container status \"03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\": rpc error: code = NotFound desc = could not find container \"03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675\": container with ID starting with 03d782bb5883158eb31686ef882923bc0fe18907ec34b462ad7641b8d0a6e675 not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.470249 4737 scope.go:117] "RemoveContainer" containerID="f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291"
Jan 26 18:34:26 crc kubenswrapper[4737]: E0126 18:34:26.470528 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\": container with ID starting with f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291 not found: ID does not exist" containerID="f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.470550 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291"} err="failed to get container status \"f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\": rpc error: code = NotFound desc = could not find container \"f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291\": container with ID starting with f085ef263eafe48cecfbfe1f5287470c72262710a6fd4e7f68af9c8261317291 not found: ID does not exist"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.984545 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.984774 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.984917 4737 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:26 crc kubenswrapper[4737]: I0126 18:34:26.987986 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 26 18:34:28 crc kubenswrapper[4737]: E0126 18:34:28.049430 4737 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" volumeName="registry-storage"
Jan 26 18:34:28 crc kubenswrapper[4737]: E0126 18:34:28.721908 4737 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:34:28 crc kubenswrapper[4737]: I0126 18:34:28.722451 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:34:28 crc kubenswrapper[4737]: E0126 18:34:28.748253 4737 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5ba53691f09c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:34:28.746449052 +0000 UTC m=+242.054643760,LastTimestamp:2026-01-26 18:34:28.746449052 +0000 UTC m=+242.054643760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 18:34:29 crc kubenswrapper[4737]: I0126 18:34:29.395865 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4c0c748cf75e083d2270decda9faea591260a375dedad5f27801cc7eb2e4c560"}
Jan 26 18:34:30 crc kubenswrapper[4737]: E0126 18:34:30.929340 4737 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5ba53691f09c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:34:28.746449052 +0000 UTC m=+242.054643760,LastTimestamp:2026-01-26 18:34:28.746449052 +0000 UTC m=+242.054643760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 18:34:31 crc kubenswrapper[4737]: I0126 18:34:31.406446 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c8d48d62ab7e208dd9f540ca8db01800a5e7e7218cc429d3726d156daff7f674"}
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.407054 4737 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:34:31 crc kubenswrapper[4737]: I0126 18:34:31.407211 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: I0126 18:34:31.407594 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.870759 4737 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.871413 4737 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.872140 4737 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.872696 4737 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.873271 4737 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:31 crc kubenswrapper[4737]: I0126 18:34:31.873319 4737 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 18:34:31 crc kubenswrapper[4737]: E0126 18:34:31.873772 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms"
Jan 26 18:34:32 crc kubenswrapper[4737]: E0126 18:34:32.075049 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms"
Jan 26 18:34:32 crc kubenswrapper[4737]: E0126 18:34:32.414061 4737 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:34:32 crc kubenswrapper[4737]: E0126 18:34:32.476461 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="800ms"
Jan 26 18:34:33 crc kubenswrapper[4737]: E0126 18:34:33.277552 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s"
Jan 26 18:34:34 crc kubenswrapper[4737]: E0126 18:34:34.881134 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="3.2s"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.981056 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.981771 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.982377 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.996457 4737 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.996509 4737 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:35 crc kubenswrapper[4737]: E0126 18:34:35.997085 4737 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:35 crc kubenswrapper[4737]: I0126 18:34:35.997550 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.442094 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.442444 4737 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a" exitCode=1
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.442515 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a"}
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.443118 4737 scope.go:117] "RemoveContainer" containerID="a7338aa3bff3561881f454689b4ae1ab8b46ddf950c45dd080107c7b78e6766a"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.443555 4737 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.444150 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.444446 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.445160 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c27c2e45a94f2ce1c9859bb9795bb6d899182d6f980d32764849f6fc7665f2b"}
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.445216 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"952a50e94ca71652c096ff4a8a14a691d0db8f6fd7c61848e5f0b3e0d32638ee"}
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.445521 4737 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.445545 4737 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:36 crc kubenswrapper[4737]: E0126 18:34:36.446265 4737 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.446360 4737 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.447056 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.447364 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.987533 4737 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.988090 4737 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.988411 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:36 crc kubenswrapper[4737]: I0126 18:34:36.988726 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.454965 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.455364 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4350e2afa363b9e505608b903f8870381fc688527f200433f4ecee669f6c468b"}
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.457346 4737 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6c27c2e45a94f2ce1c9859bb9795bb6d899182d6f980d32764849f6fc7665f2b" exitCode=0
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.457395 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6c27c2e45a94f2ce1c9859bb9795bb6d899182d6f980d32764849f6fc7665f2b"}
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.457706 4737 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.457725 4737 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.458260 4737 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:37 crc kubenswrapper[4737]: E0126 18:34:37.458342 4737 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.458577 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.459090 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:37 crc kubenswrapper[4737]: I0126 18:34:37.459529 4737 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:38 crc kubenswrapper[4737]: E0126 18:34:38.082928 4737 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="6.4s"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.408629 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.464307 4737 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.464597 4737 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.464830 4737 status_manager.go:851] "Failed to get status for pod" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" pod="openshift-authentication/oauth-openshift-558db77b4-9kjp9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-9kjp9\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.465296 4737 status_manager.go:851] "Failed to get status for pod" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused"
Jan 26 18:34:38 crc kubenswrapper[4737]: I0126 18:34:38.966146 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.473825 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"327778432e6c53cbf340e6889073a7de76bb5078214717fbb695264bf0e96cae"}
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475093 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c3343648be2dd9a13d6e7748713cfaacb6d59be66dacca8496b798105137fe8f"}
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475225 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"30ccfbe1290ec36717d7849377c4009196432e598a5d4f5d329a415f1a5925a0"}
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475324 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"20476326139e33738edfd98e671cb7806d8ff521ab53200e5fba5c966fc09baa"}
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475419 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475506 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3f1be105164f1fc67abf8c55b3211ba5e0e060bdd992261a4a6e8601c0e50f2a"}
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475361 4737 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:39 crc kubenswrapper[4737]: I0126 18:34:39.475680 4737 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45"
Jan 26 18:34:40 crc kubenswrapper[4737]: I0126 18:34:40.997988 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:40 crc kubenswrapper[4737]: I0126 18:34:40.998887 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:41 crc kubenswrapper[4737]: I0126 18:34:41.002914 4737 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]log ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]etcd ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-filter ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-apiextensions-informers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-apiextensions-controllers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/crd-informer-synced ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-system-namespaces-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/bootstrap-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-registration-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-discovery-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]autoregister-completion ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-openapi-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 26 18:34:41 crc kubenswrapper[4737]: livez check failed
Jan 26 18:34:41 crc kubenswrapper[4737]: I0126 18:34:41.004388 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 18:34:43 crc kubenswrapper[4737]: I0126 18:34:43.511992 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:34:43 crc kubenswrapper[4737]: I0126 18:34:43.515508 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:34:44 crc kubenswrapper[4737]: I0126 18:34:44.860961 4737 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:34:44 crc kubenswrapper[4737]: I0126 18:34:44.888490 4737 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc"
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1427ea61-f1a6-4291-be7d-ad8a166c93cc" Jan 26 18:34:45 crc kubenswrapper[4737]: I0126 18:34:45.507763 4737 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45" Jan 26 18:34:45 crc kubenswrapper[4737]: I0126 18:34:45.507797 4737 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="00d641e5-0291-480c-9413-478267450e45" Jan 26 18:34:45 crc kubenswrapper[4737]: I0126 18:34:45.512238 4737 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1427ea61-f1a6-4291-be7d-ad8a166c93cc" Jan 26 18:34:48 crc kubenswrapper[4737]: I0126 18:34:48.969663 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:34:54 crc kubenswrapper[4737]: I0126 18:34:54.854592 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 18:34:54 crc kubenswrapper[4737]: I0126 18:34:54.959483 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 18:34:55 crc kubenswrapper[4737]: I0126 18:34:55.555133 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 18:34:55 crc kubenswrapper[4737]: I0126 18:34:55.796892 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 18:34:56 crc kubenswrapper[4737]: I0126 18:34:56.551749 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 18:34:56 
crc kubenswrapper[4737]: I0126 18:34:56.870400 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 18:34:56 crc kubenswrapper[4737]: I0126 18:34:56.920561 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.091492 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.139678 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.342723 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.423200 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.585126 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.613141 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.683791 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.713741 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 18:34:57 crc kubenswrapper[4737]: I0126 18:34:57.788656 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 
26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.088293 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.096420 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.124553 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.202672 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.448415 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.463288 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.463677 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.464234 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.488660 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.567631 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.569873 4737 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.590924 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.603300 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.624327 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.735650 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.755288 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.795694 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.885769 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.886563 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 18:34:58 crc kubenswrapper[4737]: I0126 18:34:58.902238 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.081889 4737 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.085824 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.188796 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.277480 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.367311 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.368828 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.374596 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.473715 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.643375 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.795859 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.836284 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.836573 4737 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.841807 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.866668 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.917166 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.936495 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 18:34:59 crc kubenswrapper[4737]: I0126 18:34:59.938574 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.161214 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.175215 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.178492 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.217887 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.320321 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.403295 4737 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.431827 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.464395 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.499578 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.556618 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.569848 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.630539 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.664969 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.714472 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.879201 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 18:35:00 crc kubenswrapper[4737]: I0126 18:35:00.879340 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 
26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.080788 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.099830 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.130664 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.154483 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.175738 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.202798 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.270313 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.522529 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.624894 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.836233 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.875762 4737 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.936739 4737 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.955676 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.958260 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 18:35:01 crc kubenswrapper[4737]: I0126 18:35:01.987834 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.053585 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.189267 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.213560 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.394058 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.412614 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.423047 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.562421 4737 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.602208 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.711327 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.778653 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.803865 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.861231 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.902097 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.905354 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.943409 4737 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 18:35:02 crc kubenswrapper[4737]: I0126 18:35:02.954946 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.089253 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" 
Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.154737 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.191275 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.204306 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.216432 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.232712 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.285127 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.336414 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.344065 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.519053 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.529926 4737 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.534878 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-9kjp9","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.534979 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.543260 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.547505 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.557872 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.557832897 podStartE2EDuration="19.557832897s" podCreationTimestamp="2026-01-26 18:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:35:03.556739227 +0000 UTC m=+276.864933955" watchObservedRunningTime="2026-01-26 18:35:03.557832897 +0000 UTC m=+276.866027645" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.633920 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.739344 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.762835 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.767451 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 18:35:03 crc 
kubenswrapper[4737]: I0126 18:35:03.783315 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.811961 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.812255 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.870668 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.934501 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.934923 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.937825 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.942110 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 18:35:03 crc kubenswrapper[4737]: I0126 18:35:03.958491 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.056401 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.081885 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 18:35:04 crc 
kubenswrapper[4737]: I0126 18:35:04.296326 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.325938 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.334534 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.361027 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.426607 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.453510 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.464201 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.473744 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.483101 4737 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.587091 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.633385 4737 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.636137 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.660486 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.716822 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.886546 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.917342 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.964772 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.996018 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 18:35:04 crc kubenswrapper[4737]: I0126 18:35:04.999664 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" path="/var/lib/kubelet/pods/fdc44942-56de-4694-bcd4-bca48f1e1e08/volumes" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.038560 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.061563 4737 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 18:35:05 crc 
kubenswrapper[4737]: I0126 18:35:05.087764 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.109481 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.217156 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.231194 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.386048 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.386149 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.409295 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.592129 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.661190 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.846375 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.846378 4737 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.855340 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.914477 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.946751 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 18:35:05 crc kubenswrapper[4737]: I0126 18:35:05.994530 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.007462 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.011444 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.013049 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.048907 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.069920 4737 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.101706 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.135625 4737 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.153686 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.206099 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.327524 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.335830 4737 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.336187 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c8d48d62ab7e208dd9f540ca8db01800a5e7e7218cc429d3726d156daff7f674" gracePeriod=5 Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.345993 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-sfzsn"] Jan 26 18:35:06 crc kubenswrapper[4737]: E0126 18:35:06.346368 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" containerName="installer" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346399 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" containerName="installer" Jan 26 18:35:06 crc kubenswrapper[4737]: E0126 18:35:06.346418 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346432 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 18:35:06 crc kubenswrapper[4737]: E0126 18:35:06.346462 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerName="oauth-openshift" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346475 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerName="oauth-openshift" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346639 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc44942-56de-4694-bcd4-bca48f1e1e08" containerName="oauth-openshift" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346672 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3628597d-09b4-4169-ba4b-ddedf59fce32" containerName="installer" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.346688 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.347486 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.351907 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.353150 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.354693 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.354716 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.355158 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.354809 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.354927 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.355367 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.355941 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.356219 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 18:35:06 crc kubenswrapper[4737]: 
I0126 18:35:06.356239 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.356426 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.365674 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-sfzsn"] Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.379290 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.380603 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.405364 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.408219 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.476717 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.476769 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.476851 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.476894 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.476962 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477059 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-dir\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: 
\"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477098 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvk7p\" (UniqueName: \"kubernetes.io/projected/d854d47c-32cf-46b3-8add-2c1ffaf1af88-kube-api-access-gvk7p\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477124 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477141 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477282 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 
18:35:06.477302 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-policies\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477326 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477439 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.477548 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.497565 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 
18:35:06.578715 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.578847 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.578918 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.578982 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-dir\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.579058 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvk7p\" (UniqueName: \"kubernetes.io/projected/d854d47c-32cf-46b3-8add-2c1ffaf1af88-kube-api-access-gvk7p\") 
pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.583811 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.580342 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.579211 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-dir\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.584561 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.584781 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.585537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-policies\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.585624 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.585873 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.585955 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " 
pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.586117 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.586194 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.586442 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-audit-policies\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.587279 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.589862 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.590546 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.591185 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.591587 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.591632 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " 
pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.592205 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.594762 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.607919 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvk7p\" (UniqueName: \"kubernetes.io/projected/d854d47c-32cf-46b3-8add-2c1ffaf1af88-kube-api-access-gvk7p\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.613028 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.613389 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d854d47c-32cf-46b3-8add-2c1ffaf1af88-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-sfzsn\" (UID: \"d854d47c-32cf-46b3-8add-2c1ffaf1af88\") " pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.627196 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.670947 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.685532 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.761590 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.773088 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 18:35:06 crc kubenswrapper[4737]: I0126 18:35:06.782469 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.005397 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.057910 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.074011 4737 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.159478 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.184100 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.215195 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.248104 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.323274 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.328587 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.368345 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.388833 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.402223 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.433233 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-sfzsn"] Jan 26 18:35:07 crc kubenswrapper[4737]: W0126 18:35:07.437441 4737 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd854d47c_32cf_46b3_8add_2c1ffaf1af88.slice/crio-6867872f6cedc9da7b1720a25f305df88a662289a1fdf69a23fd877bd0721b5c WatchSource:0}: Error finding container 6867872f6cedc9da7b1720a25f305df88a662289a1fdf69a23fd877bd0721b5c: Status 404 returned error can't find the container with id 6867872f6cedc9da7b1720a25f305df88a662289a1fdf69a23fd877bd0721b5c Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.452518 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.575035 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.628767 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" event={"ID":"d854d47c-32cf-46b3-8add-2c1ffaf1af88","Type":"ContainerStarted","Data":"6867872f6cedc9da7b1720a25f305df88a662289a1fdf69a23fd877bd0721b5c"} Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.663055 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.701864 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.774230 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:35:07 crc kubenswrapper[4737]: I0126 18:35:07.800627 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 18:35:07 crc 
kubenswrapper[4737]: I0126 18:35:07.834398 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.040113 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.180159 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.285936 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.291777 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.315671 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.488927 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.618552 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.635529 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" event={"ID":"d854d47c-32cf-46b3-8add-2c1ffaf1af88","Type":"ContainerStarted","Data":"0321a12f5b328ca4c1b638d182bdd54a32543cbb5dddc7aec6b57e34f6452a34"} Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.635857 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.640334 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.647003 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.662481 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b57696689-sfzsn" podStartSLOduration=70.662461812 podStartE2EDuration="1m10.662461812s" podCreationTimestamp="2026-01-26 18:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:35:08.661363552 +0000 UTC m=+281.969558280" watchObservedRunningTime="2026-01-26 18:35:08.662461812 +0000 UTC m=+281.970656520" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.767218 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.777761 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 18:35:08 crc kubenswrapper[4737]: I0126 18:35:08.885326 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.104012 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.158886 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 18:35:09 crc 
kubenswrapper[4737]: I0126 18:35:09.161359 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.164024 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.393172 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.568666 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.651548 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.682892 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.750957 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.924012 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 18:35:09 crc kubenswrapper[4737]: I0126 18:35:09.924152 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.254277 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.263240 4737 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.357362 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.413172 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.604134 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.709722 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.756880 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 18:35:10 crc kubenswrapper[4737]: I0126 18:35:10.999023 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.022167 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.226047 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.441837 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.518311 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 
18:35:11.631730 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.658056 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.658162 4737 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c8d48d62ab7e208dd9f540ca8db01800a5e7e7218cc429d3726d156daff7f674" exitCode=137 Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.835921 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.837521 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.847908 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.907507 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 18:35:11 crc kubenswrapper[4737]: I0126 18:35:11.907586 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077787 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077861 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077914 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077937 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077938 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077966 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.077993 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.078252 4737 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.078266 4737 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.078284 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.078315 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.084395 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.158518 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.160061 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.179828 4737 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.180640 4737 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.181797 4737 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on 
node \"crc\" DevicePath \"\"" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.665663 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.665761 4737 scope.go:117] "RemoveContainer" containerID="c8d48d62ab7e208dd9f540ca8db01800a5e7e7218cc429d3726d156daff7f674" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.665935 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:35:12 crc kubenswrapper[4737]: I0126 18:35:12.990436 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 18:35:13 crc kubenswrapper[4737]: I0126 18:35:13.110114 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 18:35:26 crc kubenswrapper[4737]: I0126 18:35:26.838434 4737 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.497392 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.500374 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerName="controller-manager" containerID="cri-o://fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6" gracePeriod=30 Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.603190 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.603419 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerName="route-controller-manager" containerID="cri-o://11a0ae5f0b174de66e703b99bfc2b5d02f9a22aa60ae32587ca86366804c4487" gracePeriod=30 Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.875333 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.918621 4737 generic.go:334] "Generic (PLEG): container finished" podID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerID="11a0ae5f0b174de66e703b99bfc2b5d02f9a22aa60ae32587ca86366804c4487" exitCode=0 Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.918797 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" event={"ID":"9ceadcc2-c87f-4382-895a-f052e3c3597d","Type":"ContainerDied","Data":"11a0ae5f0b174de66e703b99bfc2b5d02f9a22aa60ae32587ca86366804c4487"} Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.920703 4737 generic.go:334] "Generic (PLEG): container finished" podID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerID="fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6" exitCode=0 Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.920737 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" event={"ID":"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d","Type":"ContainerDied","Data":"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6"} Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.920755 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" event={"ID":"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d","Type":"ContainerDied","Data":"de152d55b63860c94e56c78a1aee141c43521fc3edbbf92a63419be8ba723178"} Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.920760 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-n7cr7" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.920775 4737 scope.go:117] "RemoveContainer" containerID="fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.936350 4737 scope.go:117] "RemoveContainer" containerID="fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6" Jan 26 18:35:46 crc kubenswrapper[4737]: E0126 18:35:46.936952 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6\": container with ID starting with fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6 not found: ID does not exist" containerID="fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.936999 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6"} err="failed to get container status \"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6\": rpc error: code = NotFound desc = could not find container \"fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6\": container with ID starting with fa77fb6d7269f7a354d8e59ea280185366db1926f843ac68ba566c117fc068f6 not found: ID does not exist" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.971424 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca\") pod \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.971519 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qx6p\" (UniqueName: \"kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p\") pod \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.971550 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert\") pod \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.971622 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config\") pod \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.971659 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles\") pod \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\" (UID: \"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d\") " Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.972720 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" (UID: 
"9b4a67b3-c096-4abe-80d8-f15e2d4ab72d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.973099 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" (UID: "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.974470 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config" (OuterVolumeSpecName: "config") pod "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" (UID: "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.977623 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" (UID: "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.978464 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p" (OuterVolumeSpecName: "kube-api-access-6qx6p") pod "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" (UID: "9b4a67b3-c096-4abe-80d8-f15e2d4ab72d"). InnerVolumeSpecName "kube-api-access-6qx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:35:46 crc kubenswrapper[4737]: I0126 18:35:46.981725 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.072986 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca\") pod \"9ceadcc2-c87f-4382-895a-f052e3c3597d\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073117 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert\") pod \"9ceadcc2-c87f-4382-895a-f052e3c3597d\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073153 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prdjb\" (UniqueName: \"kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb\") pod \"9ceadcc2-c87f-4382-895a-f052e3c3597d\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073197 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config\") pod \"9ceadcc2-c87f-4382-895a-f052e3c3597d\" (UID: \"9ceadcc2-c87f-4382-895a-f052e3c3597d\") " Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073413 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073425 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073434 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qx6p\" (UniqueName: \"kubernetes.io/projected/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-kube-api-access-6qx6p\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073445 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.073454 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.074340 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config" (OuterVolumeSpecName: "config") pod "9ceadcc2-c87f-4382-895a-f052e3c3597d" (UID: "9ceadcc2-c87f-4382-895a-f052e3c3597d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.075048 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ceadcc2-c87f-4382-895a-f052e3c3597d" (UID: "9ceadcc2-c87f-4382-895a-f052e3c3597d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.080531 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ceadcc2-c87f-4382-895a-f052e3c3597d" (UID: "9ceadcc2-c87f-4382-895a-f052e3c3597d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.081359 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb" (OuterVolumeSpecName: "kube-api-access-prdjb") pod "9ceadcc2-c87f-4382-895a-f052e3c3597d" (UID: "9ceadcc2-c87f-4382-895a-f052e3c3597d"). InnerVolumeSpecName "kube-api-access-prdjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.103185 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"] Jan 26 18:35:47 crc kubenswrapper[4737]: E0126 18:35:47.103462 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerName="route-controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.103485 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerName="route-controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: E0126 18:35:47.103507 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerName="controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.103515 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerName="controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 
18:35:47.103624 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" containerName="route-controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.103642 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" containerName="controller-manager" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.104140 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.111936 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.128930 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.129715 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.135737 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.173815 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.173882 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.173919 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.173975 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") 
" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174021 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174052 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxn42\" (UniqueName: \"kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174123 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174208 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174265 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-thvhr\" (UniqueName: \"kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174324 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174339 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ceadcc2-c87f-4382-895a-f052e3c3597d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174352 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prdjb\" (UniqueName: \"kubernetes.io/projected/9ceadcc2-c87f-4382-895a-f052e3c3597d-kube-api-access-prdjb\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.174366 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ceadcc2-c87f-4382-895a-f052e3c3597d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.235497 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.239429 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-n7cr7"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.275470 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config\") pod 
\"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.275799 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.275900 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.275972 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.276064 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.276169 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qxn42\" (UniqueName: \"kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.276290 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.276378 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.276473 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvhr\" (UniqueName: \"kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.277683 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " 
pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.277736 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.277978 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.278085 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.279883 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.279945 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert\") pod 
\"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.282970 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.293248 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxn42\" (UniqueName: \"kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42\") pod \"controller-manager-89bd9866d-hqr74\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") " pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.293364 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvhr\" (UniqueName: \"kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr\") pod \"route-controller-manager-77f45d8f46-l74ft\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") " pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.421542 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.452215 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.600382 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.930239 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" event={"ID":"acad5266-f03a-4178-9ef3-83378661a2d7","Type":"ContainerStarted","Data":"d4f664134476361b7dad3a0ecdcec62fe5a0e857a505edf0b2116bf567fe9b25"} Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.934006 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" event={"ID":"9ceadcc2-c87f-4382-895a-f052e3c3597d","Type":"ContainerDied","Data":"9f408e8e9550ffb6d5a4cc6221e30443ddda359fcec4f12c81e0c6981597e4c5"} Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.934090 4737 scope.go:117] "RemoveContainer" containerID="11a0ae5f0b174de66e703b99bfc2b5d02f9a22aa60ae32587ca86366804c4487" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.934124 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs" Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.968817 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:35:47 crc kubenswrapper[4737]: I0126 18:35:47.973088 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7h9cs"] Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.129586 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"] Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.940513 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" event={"ID":"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f","Type":"ContainerStarted","Data":"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"} Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.940859 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" event={"ID":"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f","Type":"ContainerStarted","Data":"990a98c03027369b20828661a355e424ceead68dbd7dc7c3f6d66fd701153c54"} Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.942801 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" event={"ID":"acad5266-f03a-4178-9ef3-83378661a2d7","Type":"ContainerStarted","Data":"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"} Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.942961 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:48 crc 
kubenswrapper[4737]: I0126 18:35:48.947783 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.956398 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" podStartSLOduration=1.956378011 podStartE2EDuration="1.956378011s" podCreationTimestamp="2026-01-26 18:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:35:48.954058707 +0000 UTC m=+322.262253415" watchObservedRunningTime="2026-01-26 18:35:48.956378011 +0000 UTC m=+322.264572719" Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.969933 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" podStartSLOduration=1.969892358 podStartE2EDuration="1.969892358s" podCreationTimestamp="2026-01-26 18:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:35:48.967875103 +0000 UTC m=+322.276069821" watchObservedRunningTime="2026-01-26 18:35:48.969892358 +0000 UTC m=+322.278087076" Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.996970 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4a67b3-c096-4abe-80d8-f15e2d4ab72d" path="/var/lib/kubelet/pods/9b4a67b3-c096-4abe-80d8-f15e2d4ab72d/volumes" Jan 26 18:35:48 crc kubenswrapper[4737]: I0126 18:35:48.997691 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ceadcc2-c87f-4382-895a-f052e3c3597d" path="/var/lib/kubelet/pods/9ceadcc2-c87f-4382-895a-f052e3c3597d/volumes" Jan 26 18:35:49 crc kubenswrapper[4737]: I0126 18:35:49.951816 4737 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:35:49 crc kubenswrapper[4737]: I0126 18:35:49.956790 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.255786 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vxjt"] Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.257270 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.282064 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vxjt"] Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305164 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq88r\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-kube-api-access-wq88r\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305286 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-trusted-ca\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305384 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-tls\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305587 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305755 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-bound-sa-token\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305860 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.305965 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" 
Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.306126 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-certificates\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.332700 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.406802 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-trusted-ca\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.406879 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-tls\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.406904 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-bound-sa-token\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: 
\"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.406930 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.406955 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.407007 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-certificates\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.407043 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq88r\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-kube-api-access-wq88r\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.408208 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.408851 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-trusted-ca\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.408904 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-certificates\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.415443 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-registry-tls\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.415901 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.431532 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wq88r\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-kube-api-access-wq88r\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt"
Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.432211 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be2e5820-2a4f-4b3a-88d3-7b6697825ddf-bound-sa-token\") pod \"image-registry-66df7c8f76-2vxjt\" (UID: \"be2e5820-2a4f-4b3a-88d3-7b6697825ddf\") " pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt"
Jan 26 18:36:23 crc kubenswrapper[4737]: I0126 18:36:23.587972 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt"
Jan 26 18:36:24 crc kubenswrapper[4737]: I0126 18:36:24.074726 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2vxjt"]
Jan 26 18:36:24 crc kubenswrapper[4737]: I0126 18:36:24.151948 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" event={"ID":"be2e5820-2a4f-4b3a-88d3-7b6697825ddf","Type":"ContainerStarted","Data":"33132614ae5eee7244fdc8192b9e41db8c196a531129cab0705e84a9696550cd"}
Jan 26 18:36:25 crc kubenswrapper[4737]: I0126 18:36:25.160867 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" event={"ID":"be2e5820-2a4f-4b3a-88d3-7b6697825ddf","Type":"ContainerStarted","Data":"a903bed88cea4e536a0c9e1c3478983e308eaa816764128f78683aa4922dcd72"}
Jan 26 18:36:25 crc kubenswrapper[4737]: I0126 18:36:25.161136 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt"
Jan 26 18:36:25 crc kubenswrapper[4737]: I0126 18:36:25.194192 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" podStartSLOduration=2.194147611 podStartE2EDuration="2.194147611s" podCreationTimestamp="2026-01-26 18:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:36:25.187637634 +0000 UTC m=+358.495832362" watchObservedRunningTime="2026-01-26 18:36:25.194147611 +0000 UTC m=+358.502342359"
Jan 26 18:36:26 crc kubenswrapper[4737]: I0126 18:36:26.512768 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"]
Jan 26 18:36:26 crc kubenswrapper[4737]: I0126 18:36:26.514127 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" podUID="acad5266-f03a-4178-9ef3-83378661a2d7" containerName="controller-manager" containerID="cri-o://997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2" gracePeriod=30
Jan 26 18:36:26 crc kubenswrapper[4737]: I0126 18:36:26.527906 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"]
Jan 26 18:36:26 crc kubenswrapper[4737]: I0126 18:36:26.528349 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" podUID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" containerName="route-controller-manager" containerID="cri-o://c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201" gracePeriod=30
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.114473 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.118445 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.173123 4737 generic.go:334] "Generic (PLEG): container finished" podID="acad5266-f03a-4178-9ef3-83378661a2d7" containerID="997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2" exitCode=0
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.173185 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" event={"ID":"acad5266-f03a-4178-9ef3-83378661a2d7","Type":"ContainerDied","Data":"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"}
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.173215 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74" event={"ID":"acad5266-f03a-4178-9ef3-83378661a2d7","Type":"ContainerDied","Data":"d4f664134476361b7dad3a0ecdcec62fe5a0e857a505edf0b2116bf567fe9b25"}
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.173237 4737 scope.go:117] "RemoveContainer" containerID="997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.173332 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-89bd9866d-hqr74"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.177659 4737 generic.go:334] "Generic (PLEG): container finished" podID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" containerID="c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201" exitCode=0
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.177717 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" event={"ID":"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f","Type":"ContainerDied","Data":"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"}
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.177754 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft" event={"ID":"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f","Type":"ContainerDied","Data":"990a98c03027369b20828661a355e424ceead68dbd7dc7c3f6d66fd701153c54"}
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.177828 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.209585 4737 scope.go:117] "RemoveContainer" containerID="997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"
Jan 26 18:36:27 crc kubenswrapper[4737]: E0126 18:36:27.210636 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2\": container with ID starting with 997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2 not found: ID does not exist" containerID="997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.210687 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2"} err="failed to get container status \"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2\": rpc error: code = NotFound desc = could not find container \"997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2\": container with ID starting with 997fb153a3549def33bb6292cecabb15a318bd902ae77c2e9600abf56d816cd2 not found: ID does not exist"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.210729 4737 scope.go:117] "RemoveContainer" containerID="c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.228450 4737 scope.go:117] "RemoveContainer" containerID="c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"
Jan 26 18:36:27 crc kubenswrapper[4737]: E0126 18:36:27.229840 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201\": container with ID starting with c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201 not found: ID does not exist" containerID="c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.229882 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201"} err="failed to get container status \"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201\": rpc error: code = NotFound desc = could not find container \"c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201\": container with ID starting with c36843d5984f71ee93ca20700d00dc50ce07e510c50474da3a23adec17b78201 not found: ID does not exist"
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268715 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca\") pod \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268773 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvhr\" (UniqueName: \"kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr\") pod \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268805 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert\") pod \"acad5266-f03a-4178-9ef3-83378661a2d7\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268869 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca\") pod \"acad5266-f03a-4178-9ef3-83378661a2d7\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268895 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config\") pod \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268915 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxn42\" (UniqueName: \"kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42\") pod \"acad5266-f03a-4178-9ef3-83378661a2d7\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268940 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config\") pod \"acad5266-f03a-4178-9ef3-83378661a2d7\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.268979 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles\") pod \"acad5266-f03a-4178-9ef3-83378661a2d7\" (UID: \"acad5266-f03a-4178-9ef3-83378661a2d7\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.269055 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert\") pod \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\" (UID: \"d2f0da27-aeb1-4150-aa7c-545f5dd5b18f\") "
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.269263 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" (UID: "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.269541 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.269823 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "acad5266-f03a-4178-9ef3-83378661a2d7" (UID: "acad5266-f03a-4178-9ef3-83378661a2d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.270267 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config" (OuterVolumeSpecName: "config") pod "acad5266-f03a-4178-9ef3-83378661a2d7" (UID: "acad5266-f03a-4178-9ef3-83378661a2d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.270518 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "acad5266-f03a-4178-9ef3-83378661a2d7" (UID: "acad5266-f03a-4178-9ef3-83378661a2d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.270572 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config" (OuterVolumeSpecName: "config") pod "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" (UID: "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.275427 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" (UID: "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.276454 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42" (OuterVolumeSpecName: "kube-api-access-qxn42") pod "acad5266-f03a-4178-9ef3-83378661a2d7" (UID: "acad5266-f03a-4178-9ef3-83378661a2d7"). InnerVolumeSpecName "kube-api-access-qxn42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.276835 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr" (OuterVolumeSpecName: "kube-api-access-thvhr") pod "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" (UID: "d2f0da27-aeb1-4150-aa7c-545f5dd5b18f"). InnerVolumeSpecName "kube-api-access-thvhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.277734 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acad5266-f03a-4178-9ef3-83378661a2d7" (UID: "acad5266-f03a-4178-9ef3-83378661a2d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370573 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370625 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thvhr\" (UniqueName: \"kubernetes.io/projected/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-kube-api-access-thvhr\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370641 4737 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acad5266-f03a-4178-9ef3-83378661a2d7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370656 4737 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370669 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370682 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxn42\" (UniqueName: \"kubernetes.io/projected/acad5266-f03a-4178-9ef3-83378661a2d7-kube-api-access-qxn42\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370695 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.370707 4737 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acad5266-f03a-4178-9ef3-83378661a2d7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.515185 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"]
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.517972 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-89bd9866d-hqr74"]
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.534213 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"]
Jan 26 18:36:27 crc kubenswrapper[4737]: I0126 18:36:27.541439 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f45d8f46-l74ft"]
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.034209 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"]
Jan 26 18:36:28 crc kubenswrapper[4737]: E0126 18:36:28.034974 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" containerName="route-controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.035131 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" containerName="route-controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: E0126 18:36:28.035224 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acad5266-f03a-4178-9ef3-83378661a2d7" containerName="controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.035286 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="acad5266-f03a-4178-9ef3-83378661a2d7" containerName="controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.035454 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="acad5266-f03a-4178-9ef3-83378661a2d7" containerName="controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.035519 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" containerName="route-controller-manager"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.036243 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.037279 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"]
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.038094 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.038351 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.038779 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.038857 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.038967 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.040274 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.041888 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.044056 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.044143 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.044643 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.044836 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.046337 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.047089 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.059039 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"]
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.062084 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"]
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.069732 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180651 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-client-ca\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180707 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6434127b-6bcf-4362-8cad-c53729ae7833-serving-cert\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180743 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-config\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180765 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-serving-cert\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180790 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-proxy-ca-bundles\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180807 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-config\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180823 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb6qq\" (UniqueName: \"kubernetes.io/projected/6434127b-6bcf-4362-8cad-c53729ae7833-kube-api-access-gb6qq\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180892 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-client-ca\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.180917 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4nv\" (UniqueName: \"kubernetes.io/projected/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-kube-api-access-jz4nv\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281687 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6434127b-6bcf-4362-8cad-c53729ae7833-serving-cert\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281753 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-config\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281777 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-serving-cert\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281800 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-proxy-ca-bundles\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281819 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-config\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281833 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb6qq\" (UniqueName: \"kubernetes.io/projected/6434127b-6bcf-4362-8cad-c53729ae7833-kube-api-access-gb6qq\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-client-ca\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281885 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz4nv\" (UniqueName: \"kubernetes.io/projected/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-kube-api-access-jz4nv\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.281904 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-client-ca\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.282693 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-client-ca\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.284542 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6434127b-6bcf-4362-8cad-c53729ae7833-config\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.284655 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-config\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.285100 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-proxy-ca-bundles\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.285167 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-client-ca\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.288714 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6434127b-6bcf-4362-8cad-c53729ae7833-serving-cert\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.293346 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-serving-cert\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.303654 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb6qq\" (UniqueName: \"kubernetes.io/projected/6434127b-6bcf-4362-8cad-c53729ae7833-kube-api-access-gb6qq\") pod \"route-controller-manager-7957c7947-t25kq\" (UID: \"6434127b-6bcf-4362-8cad-c53729ae7833\") " pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.308344 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz4nv\" (UniqueName: \"kubernetes.io/projected/4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9-kube-api-access-jz4nv\") pod \"controller-manager-55bf5fbd4d-wxp4k\" (UID: \"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9\") " pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.352476 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.372666 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.632240 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"]
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.688230 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"]
Jan 26 18:36:28 crc kubenswrapper[4737]: W0126 18:36:28.701043 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c9aed03_6d12_4fdb_b21a_5bf4538c9bf9.slice/crio-c6b6c33011675ee685d86971a8654b2f38ba45817e261577d6b9f61ebd30ccfb WatchSource:0}: Error finding container c6b6c33011675ee685d86971a8654b2f38ba45817e261577d6b9f61ebd30ccfb: Status 404 returned error can't find the container with id c6b6c33011675ee685d86971a8654b2f38ba45817e261577d6b9f61ebd30ccfb
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.991978 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acad5266-f03a-4178-9ef3-83378661a2d7" path="/var/lib/kubelet/pods/acad5266-f03a-4178-9ef3-83378661a2d7/volumes"
Jan 26 18:36:28 crc kubenswrapper[4737]: I0126 18:36:28.993268 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f0da27-aeb1-4150-aa7c-545f5dd5b18f" path="/var/lib/kubelet/pods/d2f0da27-aeb1-4150-aa7c-545f5dd5b18f/volumes"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.192974 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq" event={"ID":"6434127b-6bcf-4362-8cad-c53729ae7833","Type":"ContainerStarted","Data":"a5732168562c8785a8963419e97bf8f1270e255bed881b33e4b662df7199585c"}
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.193042 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq" event={"ID":"6434127b-6bcf-4362-8cad-c53729ae7833","Type":"ContainerStarted","Data":"ef9e866253e0367327fa033a6cea3b0c50baa5f65ccf352c5173309e832919c7"}
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.193342 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.194492 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k" event={"ID":"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9","Type":"ContainerStarted","Data":"d762d45bcac011321a3242cdcdc01963407b6a6faa15b62b4681033df5a95170"}
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.194526 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k" event={"ID":"4c9aed03-6d12-4fdb-b21a-5bf4538c9bf9","Type":"ContainerStarted","Data":"c6b6c33011675ee685d86971a8654b2f38ba45817e261577d6b9f61ebd30ccfb"}
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.194814 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.216966 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.221901 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.258458 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7957c7947-t25kq" podStartSLOduration=3.258435316 podStartE2EDuration="3.258435316s" podCreationTimestamp="2026-01-26 18:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:36:29.254404147 +0000 UTC m=+362.562598865" watchObservedRunningTime="2026-01-26 18:36:29.258435316 +0000 UTC m=+362.566630024"
Jan 26 18:36:29 crc kubenswrapper[4737]: I0126 18:36:29.280139 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55bf5fbd4d-wxp4k" podStartSLOduration=3.2801208649999998 podStartE2EDuration="3.280120865s" podCreationTimestamp="2026-01-26 18:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:36:29.277465583 +0000 UTC m=+362.585660291" watchObservedRunningTime="2026-01-26 18:36:29.280120865 +0000 UTC m=+362.588315573"
Jan 26
18:36:30 crc kubenswrapper[4737]: I0126 18:36:30.948781 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:36:30 crc kubenswrapper[4737]: I0126 18:36:30.949230 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:36:43 crc kubenswrapper[4737]: I0126 18:36:43.596424 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2vxjt" Jan 26 18:36:43 crc kubenswrapper[4737]: I0126 18:36:43.686381 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:37:00 crc kubenswrapper[4737]: I0126 18:37:00.949229 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:37:00 crc kubenswrapper[4737]: I0126 18:37:00.949826 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.042584 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.043480 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f4ldv" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="registry-server" containerID="cri-o://814d1b960114e4158a347e60bd2a0b55832520a8df14191ce7afa97e33da0cc0" gracePeriod=30 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.066490 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.067107 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6tf2g" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="registry-server" containerID="cri-o://2edbe879efcb559f15a3d3f855130d51d9cf622672b2294bec0f7d4e78c26fbd" gracePeriod=30 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.079443 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.084204 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.084560 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5j2cd" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="registry-server" containerID="cri-o://52ea918acb02ea5114f82eff17b4c7301ac4eb2ad1d798ec9e7528ca1f3c8dad" gracePeriod=30 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.091789 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.095863 4737 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-nmjc5" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="registry-server" containerID="cri-o://d8c0f96fa74c12eb06608cd0966cc6c6bb7c15c5110082f83d27fc0ea772d03f" gracePeriod=30 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.104552 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dr8sf"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.108339 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.113613 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dr8sf"] Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.222240 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpf2q\" (UniqueName: \"kubernetes.io/projected/faf30849-7c19-44f9-ba42-3ad3f14efe0d-kube-api-access-gpf2q\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.222313 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.222359 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.323356 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpf2q\" (UniqueName: \"kubernetes.io/projected/faf30849-7c19-44f9-ba42-3ad3f14efe0d-kube-api-access-gpf2q\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.323432 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.323472 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.324840 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc 
kubenswrapper[4737]: I0126 18:37:06.336590 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/faf30849-7c19-44f9-ba42-3ad3f14efe0d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.347290 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpf2q\" (UniqueName: \"kubernetes.io/projected/faf30849-7c19-44f9-ba42-3ad3f14efe0d-kube-api-access-gpf2q\") pod \"marketplace-operator-79b997595-dr8sf\" (UID: \"faf30849-7c19-44f9-ba42-3ad3f14efe0d\") " pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.406017 4737 generic.go:334] "Generic (PLEG): container finished" podID="7acd9116-baab-48b1-ab22-7310f60fada8" containerID="814d1b960114e4158a347e60bd2a0b55832520a8df14191ce7afa97e33da0cc0" exitCode=0 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.406105 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerDied","Data":"814d1b960114e4158a347e60bd2a0b55832520a8df14191ce7afa97e33da0cc0"} Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.413903 4737 generic.go:334] "Generic (PLEG): container finished" podID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerID="d8c0f96fa74c12eb06608cd0966cc6c6bb7c15c5110082f83d27fc0ea772d03f" exitCode=0 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.414036 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerDied","Data":"d8c0f96fa74c12eb06608cd0966cc6c6bb7c15c5110082f83d27fc0ea772d03f"} Jan 26 
18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.418231 4737 generic.go:334] "Generic (PLEG): container finished" podID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerID="2edbe879efcb559f15a3d3f855130d51d9cf622672b2294bec0f7d4e78c26fbd" exitCode=0 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.418285 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerDied","Data":"2edbe879efcb559f15a3d3f855130d51d9cf622672b2294bec0f7d4e78c26fbd"} Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.423160 4737 generic.go:334] "Generic (PLEG): container finished" podID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerID="52ea918acb02ea5114f82eff17b4c7301ac4eb2ad1d798ec9e7528ca1f3c8dad" exitCode=0 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.423193 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerDied","Data":"52ea918acb02ea5114f82eff17b4c7301ac4eb2ad1d798ec9e7528ca1f3c8dad"} Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.423476 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" containerID="cri-o://76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5" gracePeriod=30 Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.530569 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.539228 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.546285 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.552575 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.563799 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626416 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7br78\" (UniqueName: \"kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78\") pod \"b29f7821-ed11-4b5d-b946-4562c4c595ef\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626469 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv8qk\" (UniqueName: \"kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk\") pod \"0a348468-634f-4d18-aa1d-ecc9aff08138\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626527 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content\") pod \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626545 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhsg\" (UniqueName: 
\"kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg\") pod \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626568 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities\") pod \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\" (UID: \"0bd24ab7-1242-4a05-afc2-bd24d931cb3d\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626588 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content\") pod \"0a348468-634f-4d18-aa1d-ecc9aff08138\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626604 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities\") pod \"b29f7821-ed11-4b5d-b946-4562c4c595ef\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626625 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities\") pod \"0a348468-634f-4d18-aa1d-ecc9aff08138\" (UID: \"0a348468-634f-4d18-aa1d-ecc9aff08138\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626640 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content\") pod \"b29f7821-ed11-4b5d-b946-4562c4c595ef\" (UID: \"b29f7821-ed11-4b5d-b946-4562c4c595ef\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626659 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content\") pod \"7acd9116-baab-48b1-ab22-7310f60fada8\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626674 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities\") pod \"7acd9116-baab-48b1-ab22-7310f60fada8\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.626705 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6xnx\" (UniqueName: \"kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx\") pod \"7acd9116-baab-48b1-ab22-7310f60fada8\" (UID: \"7acd9116-baab-48b1-ab22-7310f60fada8\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.630025 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities" (OuterVolumeSpecName: "utilities") pod "0a348468-634f-4d18-aa1d-ecc9aff08138" (UID: "0a348468-634f-4d18-aa1d-ecc9aff08138"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.630931 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities" (OuterVolumeSpecName: "utilities") pod "0bd24ab7-1242-4a05-afc2-bd24d931cb3d" (UID: "0bd24ab7-1242-4a05-afc2-bd24d931cb3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.633017 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg" (OuterVolumeSpecName: "kube-api-access-9xhsg") pod "0bd24ab7-1242-4a05-afc2-bd24d931cb3d" (UID: "0bd24ab7-1242-4a05-afc2-bd24d931cb3d"). InnerVolumeSpecName "kube-api-access-9xhsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.634315 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities" (OuterVolumeSpecName: "utilities") pod "b29f7821-ed11-4b5d-b946-4562c4c595ef" (UID: "b29f7821-ed11-4b5d-b946-4562c4c595ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.634584 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx" (OuterVolumeSpecName: "kube-api-access-k6xnx") pod "7acd9116-baab-48b1-ab22-7310f60fada8" (UID: "7acd9116-baab-48b1-ab22-7310f60fada8"). InnerVolumeSpecName "kube-api-access-k6xnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.634999 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities" (OuterVolumeSpecName: "utilities") pod "7acd9116-baab-48b1-ab22-7310f60fada8" (UID: "7acd9116-baab-48b1-ab22-7310f60fada8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.637652 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk" (OuterVolumeSpecName: "kube-api-access-lv8qk") pod "0a348468-634f-4d18-aa1d-ecc9aff08138" (UID: "0a348468-634f-4d18-aa1d-ecc9aff08138"). InnerVolumeSpecName "kube-api-access-lv8qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.666979 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78" (OuterVolumeSpecName: "kube-api-access-7br78") pod "b29f7821-ed11-4b5d-b946-4562c4c595ef" (UID: "b29f7821-ed11-4b5d-b946-4562c4c595ef"). InnerVolumeSpecName "kube-api-access-7br78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.677708 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a348468-634f-4d18-aa1d-ecc9aff08138" (UID: "0a348468-634f-4d18-aa1d-ecc9aff08138"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.696268 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bd24ab7-1242-4a05-afc2-bd24d931cb3d" (UID: "0bd24ab7-1242-4a05-afc2-bd24d931cb3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.708582 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7acd9116-baab-48b1-ab22-7310f60fada8" (UID: "7acd9116-baab-48b1-ab22-7310f60fada8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738404 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738448 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd9116-baab-48b1-ab22-7310f60fada8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738463 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6xnx\" (UniqueName: \"kubernetes.io/projected/7acd9116-baab-48b1-ab22-7310f60fada8-kube-api-access-k6xnx\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738478 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7br78\" (UniqueName: \"kubernetes.io/projected/b29f7821-ed11-4b5d-b946-4562c4c595ef-kube-api-access-7br78\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738491 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv8qk\" (UniqueName: \"kubernetes.io/projected/0a348468-634f-4d18-aa1d-ecc9aff08138-kube-api-access-lv8qk\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738505 4737 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738514 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xhsg\" (UniqueName: \"kubernetes.io/projected/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-kube-api-access-9xhsg\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738546 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd24ab7-1242-4a05-afc2-bd24d931cb3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738555 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738565 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.738573 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a348468-634f-4d18-aa1d-ecc9aff08138-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.789841 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b29f7821-ed11-4b5d-b946-4562c4c595ef" (UID: "b29f7821-ed11-4b5d-b946-4562c4c595ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.815341 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.839283 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29f7821-ed11-4b5d-b946-4562c4c595ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.939927 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca\") pod \"eec275ca-9658-4733-b311-48a052e4e843\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.940064 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n6tf\" (UniqueName: \"kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf\") pod \"eec275ca-9658-4733-b311-48a052e4e843\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.940110 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics\") pod \"eec275ca-9658-4733-b311-48a052e4e843\" (UID: \"eec275ca-9658-4733-b311-48a052e4e843\") " Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.940768 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "eec275ca-9658-4733-b311-48a052e4e843" (UID: 
"eec275ca-9658-4733-b311-48a052e4e843"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.944547 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "eec275ca-9658-4733-b311-48a052e4e843" (UID: "eec275ca-9658-4733-b311-48a052e4e843"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.944568 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf" (OuterVolumeSpecName: "kube-api-access-6n6tf") pod "eec275ca-9658-4733-b311-48a052e4e843" (UID: "eec275ca-9658-4733-b311-48a052e4e843"). InnerVolumeSpecName "kube-api-access-6n6tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:06 crc kubenswrapper[4737]: I0126 18:37:06.979621 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dr8sf"] Jan 26 18:37:06 crc kubenswrapper[4737]: W0126 18:37:06.986001 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf30849_7c19_44f9_ba42_3ad3f14efe0d.slice/crio-7837ba3b94365c925656e37df1201de74b9dd91146db2cc05446238b8309bfc2 WatchSource:0}: Error finding container 7837ba3b94365c925656e37df1201de74b9dd91146db2cc05446238b8309bfc2: Status 404 returned error can't find the container with id 7837ba3b94365c925656e37df1201de74b9dd91146db2cc05446238b8309bfc2 Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.041697 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n6tf\" (UniqueName: \"kubernetes.io/projected/eec275ca-9658-4733-b311-48a052e4e843-kube-api-access-6n6tf\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.041737 4737 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eec275ca-9658-4733-b311-48a052e4e843-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.041750 4737 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eec275ca-9658-4733-b311-48a052e4e843-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.433851 4737 generic.go:334] "Generic (PLEG): container finished" podID="eec275ca-9658-4733-b311-48a052e4e843" containerID="76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5" exitCode=0 Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.433936 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" event={"ID":"eec275ca-9658-4733-b311-48a052e4e843","Type":"ContainerDied","Data":"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.434182 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" event={"ID":"eec275ca-9658-4733-b311-48a052e4e843","Type":"ContainerDied","Data":"04301efa9b877195639b5cd7785d45543d2b29c7d79cd2cd0eae22c876e0fcc1"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.434206 4737 scope.go:117] "RemoveContainer" containerID="76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.434008 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gftx9" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.438388 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4ldv" event={"ID":"7acd9116-baab-48b1-ab22-7310f60fada8","Type":"ContainerDied","Data":"42a8b0280e26f30c15d929c4022b28250c8cb4087a58203a5a92cc70a84622f3"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.438476 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4ldv" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.442184 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmjc5" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.442245 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmjc5" event={"ID":"b29f7821-ed11-4b5d-b946-4562c4c595ef","Type":"ContainerDied","Data":"cb3d80818ed16e0ed4a6e2abd51ab92e50846d996e7238e22e5dc42f98134011"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.448412 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tf2g" event={"ID":"0bd24ab7-1242-4a05-afc2-bd24d931cb3d","Type":"ContainerDied","Data":"30baa7b350004a5bac49fb79337f01d8672b087a2948e604cd1185f6a6b9c2cf"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.448743 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tf2g" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.451618 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j2cd" event={"ID":"0a348468-634f-4d18-aa1d-ecc9aff08138","Type":"ContainerDied","Data":"8aa19c5c62ddabae15482e4bbebe3267e8aa28d62e6ab9a1dccc93889621f080"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.452418 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j2cd" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.453693 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" event={"ID":"faf30849-7c19-44f9-ba42-3ad3f14efe0d","Type":"ContainerStarted","Data":"aaa74cfeecf22644d823a6a2ad6faff178f1deb2e006237901a2fbbc01ab2dc0"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.453732 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" event={"ID":"faf30849-7c19-44f9-ba42-3ad3f14efe0d","Type":"ContainerStarted","Data":"7837ba3b94365c925656e37df1201de74b9dd91146db2cc05446238b8309bfc2"} Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.453995 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.461165 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.462438 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.462854 4737 scope.go:117] "RemoveContainer" containerID="76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5" Jan 26 18:37:07 crc kubenswrapper[4737]: E0126 18:37:07.463291 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5\": container with ID starting with 76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5 not found: ID does not exist" containerID="76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5" Jan 26 18:37:07 crc 
kubenswrapper[4737]: I0126 18:37:07.463330 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5"} err="failed to get container status \"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5\": rpc error: code = NotFound desc = could not find container \"76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5\": container with ID starting with 76206cc768750069b7b9304646afbc03eb00334c9214c13020b6d7fd15730fe5 not found: ID does not exist" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.463351 4737 scope.go:117] "RemoveContainer" containerID="814d1b960114e4158a347e60bd2a0b55832520a8df14191ce7afa97e33da0cc0" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.481193 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gftx9"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.490824 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.491968 4737 scope.go:117] "RemoveContainer" containerID="f4c58c1a5a76fa4c57db377d5ab92367950e651c7f1d84f1b6286d1583822707" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.494773 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4ldv"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.500629 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.509084 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmjc5"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.513713 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:37:07 crc 
kubenswrapper[4737]: I0126 18:37:07.515311 4737 scope.go:117] "RemoveContainer" containerID="1803deef02265f1d97ac124d2f1daf6de0fbee22510ca792151b3ca7b7f44922" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.520556 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tf2g"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.523842 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dr8sf" podStartSLOduration=1.523817363 podStartE2EDuration="1.523817363s" podCreationTimestamp="2026-01-26 18:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:37:07.523056333 +0000 UTC m=+400.831251041" watchObservedRunningTime="2026-01-26 18:37:07.523817363 +0000 UTC m=+400.832012071" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.530320 4737 scope.go:117] "RemoveContainer" containerID="d8c0f96fa74c12eb06608cd0966cc6c6bb7c15c5110082f83d27fc0ea772d03f" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.544106 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.546996 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j2cd"] Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.547667 4737 scope.go:117] "RemoveContainer" containerID="f40783bb9b568b9258c666cd5f416e61a600bd9ddddbe5688a8c33e0758c3fa0" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.568695 4737 scope.go:117] "RemoveContainer" containerID="f8fd9f29d206f3c87bc1a7b0ddafeec3e43e2471474919345b27ec7f8ff03f6f" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.591301 4737 scope.go:117] "RemoveContainer" containerID="2edbe879efcb559f15a3d3f855130d51d9cf622672b2294bec0f7d4e78c26fbd" Jan 26 18:37:07 crc 
kubenswrapper[4737]: I0126 18:37:07.617835 4737 scope.go:117] "RemoveContainer" containerID="c7d03ae45b8110a35d88dcabe2b15422331a9a427c878838d5047bc555143b09" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.635239 4737 scope.go:117] "RemoveContainer" containerID="662e43ac99f0d65716cd00ff4843a9ef4ed1637173c0916f1cc2c052cb169073" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.648408 4737 scope.go:117] "RemoveContainer" containerID="52ea918acb02ea5114f82eff17b4c7301ac4eb2ad1d798ec9e7528ca1f3c8dad" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.661900 4737 scope.go:117] "RemoveContainer" containerID="95225ca6d9f22406abd55ce795aab2ba9ba467bd8b67a6bb0c94a9e039dfd744" Jan 26 18:37:07 crc kubenswrapper[4737]: I0126 18:37:07.676191 4737 scope.go:117] "RemoveContainer" containerID="62b12fc853c195be5439930e04a17ff718e7b95940f216b5f4fbe5774671a839" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259594 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kgrfg"] Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259792 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259803 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259813 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259819 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259830 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259837 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259844 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259850 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259859 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259864 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259872 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259877 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259886 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259891 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259901 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259906 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="extract-content" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259913 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259918 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259927 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259933 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259940 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259945 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259953 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259958 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="extract-utilities" Jan 26 18:37:08 crc kubenswrapper[4737]: E0126 18:37:08.259968 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.259973 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260093 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260105 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec275ca-9658-4733-b311-48a052e4e843" containerName="marketplace-operator" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260117 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260126 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260133 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" containerName="registry-server" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.260782 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.263230 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.271405 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgrfg"] Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.372736 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhcq\" (UniqueName: \"kubernetes.io/projected/927a6ff0-afc5-477b-b139-e02a9f9b4452-kube-api-access-jhhcq\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.372817 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-catalog-content\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.372996 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-utilities\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.461962 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nx2jv"] Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.462958 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.466162 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.474227 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhcq\" (UniqueName: \"kubernetes.io/projected/927a6ff0-afc5-477b-b139-e02a9f9b4452-kube-api-access-jhhcq\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.474305 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-catalog-content\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.474338 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-utilities\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.474826 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-catalog-content\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.474977 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/927a6ff0-afc5-477b-b139-e02a9f9b4452-utilities\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.476942 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx2jv"] Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.500048 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhcq\" (UniqueName: \"kubernetes.io/projected/927a6ff0-afc5-477b-b139-e02a9f9b4452-kube-api-access-jhhcq\") pod \"redhat-marketplace-kgrfg\" (UID: \"927a6ff0-afc5-477b-b139-e02a9f9b4452\") " pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.575657 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh65l\" (UniqueName: \"kubernetes.io/projected/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-kube-api-access-sh65l\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.575702 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-utilities\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.575935 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-catalog-content\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " 
pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.582164 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.677330 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-catalog-content\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.677438 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh65l\" (UniqueName: \"kubernetes.io/projected/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-kube-api-access-sh65l\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.677464 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-utilities\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.678012 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-utilities\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.678349 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-catalog-content\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.703577 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh65l\" (UniqueName: \"kubernetes.io/projected/89059a8c-e6df-4f31-afd5-78a98ee6b4e5-kube-api-access-sh65l\") pod \"redhat-operators-nx2jv\" (UID: \"89059a8c-e6df-4f31-afd5-78a98ee6b4e5\") " pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.723880 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" podUID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" containerName="registry" containerID="cri-o://ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825" gracePeriod=30 Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.781667 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx2jv" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.966799 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgrfg"] Jan 26 18:37:08 crc kubenswrapper[4737]: W0126 18:37:08.972400 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod927a6ff0_afc5_477b_b139_e02a9f9b4452.slice/crio-b410caabc8da53b751d14374a77378eaebd19ed7b98e4508edc51f5cf62d38b7 WatchSource:0}: Error finding container b410caabc8da53b751d14374a77378eaebd19ed7b98e4508edc51f5cf62d38b7: Status 404 returned error can't find the container with id b410caabc8da53b751d14374a77378eaebd19ed7b98e4508edc51f5cf62d38b7 Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.991374 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a348468-634f-4d18-aa1d-ecc9aff08138" path="/var/lib/kubelet/pods/0a348468-634f-4d18-aa1d-ecc9aff08138/volumes" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.992490 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd24ab7-1242-4a05-afc2-bd24d931cb3d" path="/var/lib/kubelet/pods/0bd24ab7-1242-4a05-afc2-bd24d931cb3d/volumes" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.993207 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acd9116-baab-48b1-ab22-7310f60fada8" path="/var/lib/kubelet/pods/7acd9116-baab-48b1-ab22-7310f60fada8/volumes" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.994380 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29f7821-ed11-4b5d-b946-4562c4c595ef" path="/var/lib/kubelet/pods/b29f7821-ed11-4b5d-b946-4562c4c595ef/volumes" Jan 26 18:37:08 crc kubenswrapper[4737]: I0126 18:37:08.995013 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec275ca-9658-4733-b311-48a052e4e843" 
path="/var/lib/kubelet/pods/eec275ca-9658-4733-b311-48a052e4e843/volumes" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.127494 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.169706 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx2jv"] Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288535 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288647 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288680 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288801 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.288866 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.289136 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.289199 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh7x4\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4\") pod \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\" (UID: \"7cd9832f-e47d-4503-88fb-6a197b2fe89d\") " Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.290096 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.290272 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.297201 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.298296 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.298441 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.300014 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.300146 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4" (OuterVolumeSpecName: "kube-api-access-hh7x4") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "kube-api-access-hh7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.306369 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7cd9832f-e47d-4503-88fb-6a197b2fe89d" (UID: "7cd9832f-e47d-4503-88fb-6a197b2fe89d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390386 4737 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7cd9832f-e47d-4503-88fb-6a197b2fe89d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390431 4737 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390442 4737 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390451 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh7x4\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-kube-api-access-hh7x4\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390466 4737 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cd9832f-e47d-4503-88fb-6a197b2fe89d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390474 4737 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7cd9832f-e47d-4503-88fb-6a197b2fe89d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.390482 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9832f-e47d-4503-88fb-6a197b2fe89d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:37:09 crc 
kubenswrapper[4737]: I0126 18:37:09.486454 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx2jv" event={"ID":"89059a8c-e6df-4f31-afd5-78a98ee6b4e5","Type":"ContainerStarted","Data":"9cd8e78e463710ca95af8c95630fb4b0b42c0296cedabdd4bee830cb8f719e37"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.486502 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx2jv" event={"ID":"89059a8c-e6df-4f31-afd5-78a98ee6b4e5","Type":"ContainerStarted","Data":"398db5477600f43e0e43c581216c8e9d9fe775f32558669bfaa8e5cf22ccd685"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.491668 4737 generic.go:334] "Generic (PLEG): container finished" podID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" containerID="ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825" exitCode=0 Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.491740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" event={"ID":"7cd9832f-e47d-4503-88fb-6a197b2fe89d","Type":"ContainerDied","Data":"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.491786 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.491837 4737 scope.go:117] "RemoveContainer" containerID="ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.491818 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7c9pc" event={"ID":"7cd9832f-e47d-4503-88fb-6a197b2fe89d","Type":"ContainerDied","Data":"de95ff1ca34daa6c09ba39c19bf9f0f591ba02698ddf8a9aa80905f7c696901f"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.493533 4737 generic.go:334] "Generic (PLEG): container finished" podID="927a6ff0-afc5-477b-b139-e02a9f9b4452" containerID="c3be08424de4a878e159c3bb1f3836febe259b9c0c728eef2651512bdaead1b4" exitCode=0 Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.493620 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgrfg" event={"ID":"927a6ff0-afc5-477b-b139-e02a9f9b4452","Type":"ContainerDied","Data":"c3be08424de4a878e159c3bb1f3836febe259b9c0c728eef2651512bdaead1b4"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.493655 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgrfg" event={"ID":"927a6ff0-afc5-477b-b139-e02a9f9b4452","Type":"ContainerStarted","Data":"b410caabc8da53b751d14374a77378eaebd19ed7b98e4508edc51f5cf62d38b7"} Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.555281 4737 scope.go:117] "RemoveContainer" containerID="ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825" Jan 26 18:37:09 crc kubenswrapper[4737]: E0126 18:37:09.559907 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825\": container with ID starting with 
ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825 not found: ID does not exist" containerID="ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.559986 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825"} err="failed to get container status \"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825\": rpc error: code = NotFound desc = could not find container \"ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825\": container with ID starting with ee0aa3383a99cad3a21e6a3bc164ffc3c5a705ceb07fb383879fddc60bb3a825 not found: ID does not exist" Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.563293 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:37:09 crc kubenswrapper[4737]: I0126 18:37:09.570054 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7c9pc"] Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.501867 4737 generic.go:334] "Generic (PLEG): container finished" podID="89059a8c-e6df-4f31-afd5-78a98ee6b4e5" containerID="9cd8e78e463710ca95af8c95630fb4b0b42c0296cedabdd4bee830cb8f719e37" exitCode=0 Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.501919 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx2jv" event={"ID":"89059a8c-e6df-4f31-afd5-78a98ee6b4e5","Type":"ContainerDied","Data":"9cd8e78e463710ca95af8c95630fb4b0b42c0296cedabdd4bee830cb8f719e37"} Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.505935 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgrfg" 
event={"ID":"927a6ff0-afc5-477b-b139-e02a9f9b4452","Type":"ContainerStarted","Data":"2f72a1d8e7dbe500c5d45d897efea6eeb3ea9d27702b7b076ba274209a6070d3"} Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.666400 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hjhjz"] Jan 26 18:37:10 crc kubenswrapper[4737]: E0126 18:37:10.666788 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" containerName="registry" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.666806 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" containerName="registry" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.666963 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" containerName="registry" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.668104 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.671135 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.673274 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjhjz"] Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.807680 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhs6m\" (UniqueName: \"kubernetes.io/projected/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-kube-api-access-fhs6m\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.807731 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-catalog-content\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.807815 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-utilities\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.853546 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.856479 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.860653 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.864506 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.909530 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-utilities\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.909606 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhs6m\" (UniqueName: \"kubernetes.io/projected/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-kube-api-access-fhs6m\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.909646 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-catalog-content\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.910030 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-catalog-content\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " 
pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.910415 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-utilities\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.927840 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhs6m\" (UniqueName: \"kubernetes.io/projected/99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2-kube-api-access-fhs6m\") pod \"certified-operators-hjhjz\" (UID: \"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2\") " pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.989530 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd9832f-e47d-4503-88fb-6a197b2fe89d" path="/var/lib/kubelet/pods/7cd9832f-e47d-4503-88fb-6a197b2fe89d/volumes" Jan 26 18:37:10 crc kubenswrapper[4737]: I0126 18:37:10.993439 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hjhjz" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.010654 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t98\" (UniqueName: \"kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.010735 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.010807 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.112640 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6t98\" (UniqueName: \"kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.113271 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities\") pod 
\"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.113328 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.113930 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.113930 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.138311 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6t98\" (UniqueName: \"kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98\") pod \"community-operators-2v6jg\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.179658 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v6jg" Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.376536 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjhjz"] Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.511897 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjhjz" event={"ID":"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2","Type":"ContainerStarted","Data":"a334636fe33c53d572e791c59bc0bf5746d6ff829fab56cad580ab9c38f17070"} Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.514696 4737 generic.go:334] "Generic (PLEG): container finished" podID="927a6ff0-afc5-477b-b139-e02a9f9b4452" containerID="2f72a1d8e7dbe500c5d45d897efea6eeb3ea9d27702b7b076ba274209a6070d3" exitCode=0 Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.514728 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgrfg" event={"ID":"927a6ff0-afc5-477b-b139-e02a9f9b4452","Type":"ContainerDied","Data":"2f72a1d8e7dbe500c5d45d897efea6eeb3ea9d27702b7b076ba274209a6070d3"} Jan 26 18:37:11 crc kubenswrapper[4737]: I0126 18:37:11.560768 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 18:37:11 crc kubenswrapper[4737]: W0126 18:37:11.633817 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod575ea0ec_a40c_47ca_b30d_a1907aca111e.slice/crio-ae2399c04ea4321b84e4a1447f6109fa675d51426d46929665b53a42fd760314 WatchSource:0}: Error finding container ae2399c04ea4321b84e4a1447f6109fa675d51426d46929665b53a42fd760314: Status 404 returned error can't find the container with id ae2399c04ea4321b84e4a1447f6109fa675d51426d46929665b53a42fd760314 Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.521103 4737 generic.go:334] "Generic (PLEG): container finished" 
podID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerID="3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10" exitCode=0 Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.521165 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerDied","Data":"3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10"} Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.521558 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerStarted","Data":"ae2399c04ea4321b84e4a1447f6109fa675d51426d46929665b53a42fd760314"} Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.524791 4737 generic.go:334] "Generic (PLEG): container finished" podID="89059a8c-e6df-4f31-afd5-78a98ee6b4e5" containerID="4e5c60279c465584736b5b1a7075d946bf90ca3a5dcaf7f7a256dfbb6330c21c" exitCode=0 Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.524872 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx2jv" event={"ID":"89059a8c-e6df-4f31-afd5-78a98ee6b4e5","Type":"ContainerDied","Data":"4e5c60279c465584736b5b1a7075d946bf90ca3a5dcaf7f7a256dfbb6330c21c"} Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.526610 4737 generic.go:334] "Generic (PLEG): container finished" podID="99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2" containerID="3729edaf7bccfc60d2de7dd9c5bf2d8b85bb531dc5c7b63be8d2c645ba8d7bab" exitCode=0 Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.526711 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjhjz" event={"ID":"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2","Type":"ContainerDied","Data":"3729edaf7bccfc60d2de7dd9c5bf2d8b85bb531dc5c7b63be8d2c645ba8d7bab"} Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.530344 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgrfg" event={"ID":"927a6ff0-afc5-477b-b139-e02a9f9b4452","Type":"ContainerStarted","Data":"9a9513eb76d352419f5a24a466617b06e45d048f160b43869003304cb3eb0053"} Jan 26 18:37:12 crc kubenswrapper[4737]: I0126 18:37:12.594176 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kgrfg" podStartSLOduration=2.146876507 podStartE2EDuration="4.594154961s" podCreationTimestamp="2026-01-26 18:37:08 +0000 UTC" firstStartedPulling="2026-01-26 18:37:09.495182349 +0000 UTC m=+402.803377057" lastFinishedPulling="2026-01-26 18:37:11.942460803 +0000 UTC m=+405.250655511" observedRunningTime="2026-01-26 18:37:12.589817639 +0000 UTC m=+405.898012367" watchObservedRunningTime="2026-01-26 18:37:12.594154961 +0000 UTC m=+405.902349669" Jan 26 18:37:13 crc kubenswrapper[4737]: I0126 18:37:13.540116 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerStarted","Data":"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2"} Jan 26 18:37:13 crc kubenswrapper[4737]: I0126 18:37:13.542718 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx2jv" event={"ID":"89059a8c-e6df-4f31-afd5-78a98ee6b4e5","Type":"ContainerStarted","Data":"519417d6b8b0392f3c956df4726c7378992ffb9c67cba319435af37219a1f173"} Jan 26 18:37:13 crc kubenswrapper[4737]: I0126 18:37:13.594311 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nx2jv" podStartSLOduration=3.089872004 podStartE2EDuration="5.594291758s" podCreationTimestamp="2026-01-26 18:37:08 +0000 UTC" firstStartedPulling="2026-01-26 18:37:10.503966869 +0000 UTC m=+403.812161577" lastFinishedPulling="2026-01-26 18:37:13.008386623 +0000 UTC m=+406.316581331" 
observedRunningTime="2026-01-26 18:37:13.591257079 +0000 UTC m=+406.899451787" watchObservedRunningTime="2026-01-26 18:37:13.594291758 +0000 UTC m=+406.902486466" Jan 26 18:37:14 crc kubenswrapper[4737]: I0126 18:37:14.551831 4737 generic.go:334] "Generic (PLEG): container finished" podID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerID="086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2" exitCode=0 Jan 26 18:37:14 crc kubenswrapper[4737]: I0126 18:37:14.551881 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerDied","Data":"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2"} Jan 26 18:37:14 crc kubenswrapper[4737]: I0126 18:37:14.554947 4737 generic.go:334] "Generic (PLEG): container finished" podID="99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2" containerID="8f5a3929827f51882a471003d5461e7439d456e3b31aa63ec9208dce041c4dbb" exitCode=0 Jan 26 18:37:14 crc kubenswrapper[4737]: I0126 18:37:14.555015 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjhjz" event={"ID":"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2","Type":"ContainerDied","Data":"8f5a3929827f51882a471003d5461e7439d456e3b31aa63ec9208dce041c4dbb"} Jan 26 18:37:16 crc kubenswrapper[4737]: I0126 18:37:16.570805 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerStarted","Data":"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f"} Jan 26 18:37:16 crc kubenswrapper[4737]: I0126 18:37:16.573141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjhjz" event={"ID":"99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2","Type":"ContainerStarted","Data":"68bbc3fcaddd5f8e0027dbcde0cfe374425e81434e9261104ca067270c7085e7"} Jan 26 18:37:16 crc 
kubenswrapper[4737]: I0126 18:37:16.592997 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2v6jg" podStartSLOduration=3.224604469 podStartE2EDuration="6.592971905s" podCreationTimestamp="2026-01-26 18:37:10 +0000 UTC" firstStartedPulling="2026-01-26 18:37:12.522677791 +0000 UTC m=+405.830872499" lastFinishedPulling="2026-01-26 18:37:15.891045227 +0000 UTC m=+409.199239935" observedRunningTime="2026-01-26 18:37:16.588181321 +0000 UTC m=+409.896376029" watchObservedRunningTime="2026-01-26 18:37:16.592971905 +0000 UTC m=+409.901166613" Jan 26 18:37:16 crc kubenswrapper[4737]: I0126 18:37:16.604305 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hjhjz" podStartSLOduration=3.066108286 podStartE2EDuration="6.604286337s" podCreationTimestamp="2026-01-26 18:37:10 +0000 UTC" firstStartedPulling="2026-01-26 18:37:12.529330723 +0000 UTC m=+405.837525431" lastFinishedPulling="2026-01-26 18:37:16.067508774 +0000 UTC m=+409.375703482" observedRunningTime="2026-01-26 18:37:16.603364214 +0000 UTC m=+409.911559162" watchObservedRunningTime="2026-01-26 18:37:16.604286337 +0000 UTC m=+409.912481045" Jan 26 18:37:18 crc kubenswrapper[4737]: I0126 18:37:18.582417 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:18 crc kubenswrapper[4737]: I0126 18:37:18.582793 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:18 crc kubenswrapper[4737]: I0126 18:37:18.634982 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kgrfg" Jan 26 18:37:18 crc kubenswrapper[4737]: I0126 18:37:18.781932 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nx2jv" 
Jan 26 18:37:18 crc kubenswrapper[4737]: I0126 18:37:18.781990 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nx2jv"
Jan 26 18:37:19 crc kubenswrapper[4737]: I0126 18:37:19.651343 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kgrfg"
Jan 26 18:37:19 crc kubenswrapper[4737]: I0126 18:37:19.821045 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nx2jv" podUID="89059a8c-e6df-4f31-afd5-78a98ee6b4e5" containerName="registry-server" probeResult="failure" output=<
Jan 26 18:37:19 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s
Jan 26 18:37:19 crc kubenswrapper[4737]: >
Jan 26 18:37:20 crc kubenswrapper[4737]: I0126 18:37:20.993535 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hjhjz"
Jan 26 18:37:20 crc kubenswrapper[4737]: I0126 18:37:20.994042 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hjhjz"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.032697 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hjhjz"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.180016 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2v6jg"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.181431 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2v6jg"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.219330 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2v6jg"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.652219 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2v6jg"
Jan 26 18:37:21 crc kubenswrapper[4737]: I0126 18:37:21.652323 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hjhjz"
Jan 26 18:37:28 crc kubenswrapper[4737]: I0126 18:37:28.818619 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nx2jv"
Jan 26 18:37:28 crc kubenswrapper[4737]: I0126 18:37:28.858328 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nx2jv"
Jan 26 18:37:30 crc kubenswrapper[4737]: I0126 18:37:30.949448 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:37:30 crc kubenswrapper[4737]: I0126 18:37:30.949508 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:37:30 crc kubenswrapper[4737]: I0126 18:37:30.949556 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5"
Jan 26 18:37:30 crc kubenswrapper[4737]: I0126 18:37:30.950164 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 18:37:30 crc kubenswrapper[4737]: I0126 18:37:30.950211 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130" gracePeriod=600
Jan 26 18:37:31 crc kubenswrapper[4737]: I0126 18:37:31.656480 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130" exitCode=0
Jan 26 18:37:31 crc kubenswrapper[4737]: I0126 18:37:31.656669 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130"}
Jan 26 18:37:31 crc kubenswrapper[4737]: I0126 18:37:31.656937 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25"}
Jan 26 18:37:31 crc kubenswrapper[4737]: I0126 18:37:31.656969 4737 scope.go:117] "RemoveContainer" containerID="bea5fce0e1e77606f5e8f6cb2c1b339d6b7b8174e1f68a050834be1f5bedfec6"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.057367 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"]
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.058659 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.060776 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.061343 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.061370 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.061398 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.061458 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.067335 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"]
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.168099 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac33098-bcb9-4a6e-966c-64cf4ad85006-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.168186 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1ac33098-bcb9-4a6e-966c-64cf4ad85006-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.168423 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbq7\" (UniqueName: \"kubernetes.io/projected/1ac33098-bcb9-4a6e-966c-64cf4ad85006-kube-api-access-2xbq7\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.270128 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac33098-bcb9-4a6e-966c-64cf4ad85006-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.270244 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1ac33098-bcb9-4a6e-966c-64cf4ad85006-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.270312 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbq7\" (UniqueName: \"kubernetes.io/projected/1ac33098-bcb9-4a6e-966c-64cf4ad85006-kube-api-access-2xbq7\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.271254 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/1ac33098-bcb9-4a6e-966c-64cf4ad85006-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.292113 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac33098-bcb9-4a6e-966c-64cf4ad85006-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.293531 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbq7\" (UniqueName: \"kubernetes.io/projected/1ac33098-bcb9-4a6e-966c-64cf4ad85006-kube-api-access-2xbq7\") pod \"cluster-monitoring-operator-6d5b84845-7d69s\" (UID: \"1ac33098-bcb9-4a6e-966c-64cf4ad85006\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.386183 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"
Jan 26 18:37:37 crc kubenswrapper[4737]: I0126 18:37:37.778710 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s"]
Jan 26 18:37:38 crc kubenswrapper[4737]: I0126 18:37:38.698137 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s" event={"ID":"1ac33098-bcb9-4a6e-966c-64cf4ad85006","Type":"ContainerStarted","Data":"22a8e2ffc3194ac4e533ee19b930972d15669a7dce6d2d9d701714acea39bdfa"}
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.156140 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"]
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.157796 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.164326 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.164573 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-z4qq7"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.173021 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"]
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.231003 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zp9vb\" (UID: \"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.332347 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zp9vb\" (UID: \"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:41 crc kubenswrapper[4737]: E0126 18:37:41.332551 4737 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Jan 26 18:37:41 crc kubenswrapper[4737]: E0126 18:37:41.332644 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates podName:be4e6ff7-f2f4-4d86-8122-9da47a3c19ce nodeName:}" failed. No retries permitted until 2026-01-26 18:37:41.832624626 +0000 UTC m=+435.140819334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-zp9vb" (UID: "be4e6ff7-f2f4-4d86-8122-9da47a3c19ce") : secret "prometheus-operator-admission-webhook-tls" not found
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.718050 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s" event={"ID":"1ac33098-bcb9-4a6e-966c-64cf4ad85006","Type":"ContainerStarted","Data":"4787f2de6a8a07f40936b245ffe06f1d674d4d9cdea96f188ba59c20177fffbc"}
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.744672 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-7d69s" podStartSLOduration=1.93866061 podStartE2EDuration="4.744631979s" podCreationTimestamp="2026-01-26 18:37:37 +0000 UTC" firstStartedPulling="2026-01-26 18:37:37.787267868 +0000 UTC m=+431.095462576" lastFinishedPulling="2026-01-26 18:37:40.593239237 +0000 UTC m=+433.901433945" observedRunningTime="2026-01-26 18:37:41.739614069 +0000 UTC m=+435.047808777" watchObservedRunningTime="2026-01-26 18:37:41.744631979 +0000 UTC m=+435.052826727"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.838422 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zp9vb\" (UID: \"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:41 crc kubenswrapper[4737]: I0126 18:37:41.849651 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/be4e6ff7-f2f4-4d86-8122-9da47a3c19ce-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zp9vb\" (UID: \"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:42 crc kubenswrapper[4737]: I0126 18:37:42.072563 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:42 crc kubenswrapper[4737]: I0126 18:37:42.536196 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"]
Jan 26 18:37:42 crc kubenswrapper[4737]: I0126 18:37:42.724490 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb" event={"ID":"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce","Type":"ContainerStarted","Data":"0ff2b1f8366f887dca02526a420b318fcdebe94464af55f0246d4e1aae86acef"}
Jan 26 18:37:47 crc kubenswrapper[4737]: I0126 18:37:47.756464 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb" event={"ID":"be4e6ff7-f2f4-4d86-8122-9da47a3c19ce","Type":"ContainerStarted","Data":"45c594f11dd848c7d6c90fb68871c2d83151b49450c6cb36bb1189955c37fbe2"}
Jan 26 18:37:47 crc kubenswrapper[4737]: I0126 18:37:47.757181 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:47 crc kubenswrapper[4737]: I0126 18:37:47.761652 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb"
Jan 26 18:37:47 crc kubenswrapper[4737]: I0126 18:37:47.776038 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zp9vb" podStartSLOduration=2.684996334 podStartE2EDuration="6.776023724s" podCreationTimestamp="2026-01-26 18:37:41 +0000 UTC" firstStartedPulling="2026-01-26 18:37:42.55093641 +0000 UTC m=+435.859131118" lastFinishedPulling="2026-01-26 18:37:46.6419638 +0000 UTC m=+439.950158508" observedRunningTime="2026-01-26 18:37:47.772208125 +0000 UTC m=+441.080402833" watchObservedRunningTime="2026-01-26 18:37:47.776023724 +0000 UTC m=+441.084218432"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.205902 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-267bc"]
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.206898 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.210197 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.210527 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.213173 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-vx72r"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.213232 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.221148 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-267bc"]
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.334141 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-metrics-client-ca\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.334663 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.334735 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.334837 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x7f\" (UniqueName: \"kubernetes.io/projected/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-kube-api-access-96x7f\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.436470 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96x7f\" (UniqueName: \"kubernetes.io/projected/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-kube-api-access-96x7f\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.436555 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-metrics-client-ca\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.436603 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.436641 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: E0126 18:37:48.436828 4737 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Jan 26 18:37:48 crc kubenswrapper[4737]: E0126 18:37:48.436913 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls podName:ad5ed031-dbfa-477b-a9e9-8f3d122ddc60 nodeName:}" failed. No retries permitted until 2026-01-26 18:37:48.93688519 +0000 UTC m=+442.245079918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls") pod "prometheus-operator-db54df47d-267bc" (UID: "ad5ed031-dbfa-477b-a9e9-8f3d122ddc60") : secret "prometheus-operator-tls" not found
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.439253 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-metrics-client-ca\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.446987 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.455684 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96x7f\" (UniqueName: \"kubernetes.io/projected/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-kube-api-access-96x7f\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.945495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:48 crc kubenswrapper[4737]: I0126 18:37:48.951680 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5ed031-dbfa-477b-a9e9-8f3d122ddc60-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-267bc\" (UID: \"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60\") " pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:49 crc kubenswrapper[4737]: I0126 18:37:49.122440 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc"
Jan 26 18:37:49 crc kubenswrapper[4737]: I0126 18:37:49.543729 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-267bc"]
Jan 26 18:37:49 crc kubenswrapper[4737]: I0126 18:37:49.772288 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc" event={"ID":"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60","Type":"ContainerStarted","Data":"31a0b104f2ed73b95924fa6cc2808d61186ed3c0ccc6c4ef2932bd63f03c6bee"}
Jan 26 18:37:51 crc kubenswrapper[4737]: I0126 18:37:51.799026 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc" event={"ID":"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60","Type":"ContainerStarted","Data":"7bde227c7db7308bb6bd082609a880b331e290e492856dd9675d6f1d239e8130"}
Jan 26 18:37:51 crc kubenswrapper[4737]: I0126 18:37:51.800107 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc" event={"ID":"ad5ed031-dbfa-477b-a9e9-8f3d122ddc60","Type":"ContainerStarted","Data":"0156a3b7741b2bcd0cf4ad40cda32b93a76fff7b689ebbb71d737c7c02a5b031"}
Jan 26 18:37:51 crc kubenswrapper[4737]: I0126 18:37:51.821353 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-267bc" podStartSLOduration=2.259805484 podStartE2EDuration="3.821323701s" podCreationTimestamp="2026-01-26 18:37:48 +0000 UTC" firstStartedPulling="2026-01-26 18:37:49.55461028 +0000 UTC m=+442.862804998" lastFinishedPulling="2026-01-26 18:37:51.116128507 +0000 UTC m=+444.424323215" observedRunningTime="2026-01-26 18:37:51.821182367 +0000 UTC m=+445.129377095" watchObservedRunningTime="2026-01-26 18:37:51.821323701 +0000 UTC m=+445.129518419"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.624248 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"]
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.625568 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.628690 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.628710 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.629882 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-9hrp8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.641883 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"]
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.659128 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-v7xfx"]
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.693851 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"]
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.694744 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-v7xfx"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.703506 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.705352 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"]
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.707864 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-dcpf7"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.708215 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.708365 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.708498 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.708720 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qzpx4"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.708874 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.709029 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.729430 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ba826c0-a9b8-4675-b157-ca8ff7730271-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.729985 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.730044 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.730094 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqcb\" (UniqueName: \"kubernetes.io/projected/6ba826c0-a9b8-4675-b157-ca8ff7730271-kube-api-access-tzqcb\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.831747 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-sys\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832244 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-tls\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832369 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832470 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832569 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832673 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f99f3741-65ba-4744-ac57-6472aa4b19f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832777 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832870 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqcb\" (UniqueName: \"kubernetes.io/projected/6ba826c0-a9b8-4675-b157-ca8ff7730271-kube-api-access-tzqcb\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: E0126 18:37:53.832969 4737 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.832988 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833215 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ba826c0-a9b8-4675-b157-ca8ff7730271-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"
Jan 26 18:37:53 crc kubenswrapper[4737]: E0126 18:37:53.833309 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls podName:6ba826c0-a9b8-4675-b157-ca8ff7730271 nodeName:}" failed. No retries permitted until 2026-01-26 18:37:54.333270928 +0000 UTC m=+447.641465786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-v57d5" (UID: "6ba826c0-a9b8-4675-b157-ca8ff7730271") : secret "openshift-state-metrics-tls" not found
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833474 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-textfile\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx"
Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833674 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-root\") pod \"node-exporter-v7xfx\" (UID:
\"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833763 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fde32c0-5dcb-4efe-af6f-599aef4e391e-metrics-client-ca\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833794 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833828 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw8c\" (UniqueName: \"kubernetes.io/projected/6fde32c0-5dcb-4efe-af6f-599aef4e391e-kube-api-access-5hw8c\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833928 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92vm\" (UniqueName: \"kubernetes.io/projected/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-api-access-j92vm\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.833987 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" 
(UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-wtmp\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.834030 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.835824 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6ba826c0-a9b8-4675-b157-ca8ff7730271-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.853406 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqcb\" (UniqueName: \"kubernetes.io/projected/6ba826c0-a9b8-4675-b157-ca8ff7730271-kube-api-access-tzqcb\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.853569 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.935990 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-textfile\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936054 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-root\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936106 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fde32c0-5dcb-4efe-af6f-599aef4e391e-metrics-client-ca\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936131 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936157 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hw8c\" (UniqueName: \"kubernetes.io/projected/6fde32c0-5dcb-4efe-af6f-599aef4e391e-kube-api-access-5hw8c\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " 
pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936179 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92vm\" (UniqueName: \"kubernetes.io/projected/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-api-access-j92vm\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936194 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-wtmp\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936228 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-sys\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936249 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-tls\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936279 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936305 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936328 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936359 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f99f3741-65ba-4744-ac57-6472aa4b19f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.936439 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.937460 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-wtmp\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.937616 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-sys\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.937853 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6fde32c0-5dcb-4efe-af6f-599aef4e391e-root\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.938147 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f99f3741-65ba-4744-ac57-6472aa4b19f3-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.938461 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.938781 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6fde32c0-5dcb-4efe-af6f-599aef4e391e-metrics-client-ca\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.939096 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-textfile\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.939361 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.941545 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-tls\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.956350 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.956701 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fde32c0-5dcb-4efe-af6f-599aef4e391e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.957007 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.960928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hw8c\" (UniqueName: \"kubernetes.io/projected/6fde32c0-5dcb-4efe-af6f-599aef4e391e-kube-api-access-5hw8c\") pod \"node-exporter-v7xfx\" (UID: \"6fde32c0-5dcb-4efe-af6f-599aef4e391e\") " pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:53 crc kubenswrapper[4737]: I0126 18:37:53.962131 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92vm\" (UniqueName: \"kubernetes.io/projected/f99f3741-65ba-4744-ac57-6472aa4b19f3-kube-api-access-j92vm\") pod \"kube-state-metrics-777cb5bd5d-rmww8\" (UID: \"f99f3741-65ba-4744-ac57-6472aa4b19f3\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.028727 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.041942 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-v7xfx" Jan 26 18:37:54 crc kubenswrapper[4737]: W0126 18:37:54.067532 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fde32c0_5dcb_4efe_af6f_599aef4e391e.slice/crio-3422d6daf896ebda0f5d71bb3f8b2abd174ebbec32c0a208d0b791e8bbcb941f WatchSource:0}: Error finding container 3422d6daf896ebda0f5d71bb3f8b2abd174ebbec32c0a208d0b791e8bbcb941f: Status 404 returned error can't find the container with id 3422d6daf896ebda0f5d71bb3f8b2abd174ebbec32c0a208d0b791e8bbcb941f Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.072925 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.342697 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.347326 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba826c0-a9b8-4675-b157-ca8ff7730271-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-v57d5\" (UID: \"6ba826c0-a9b8-4675-b157-ca8ff7730271\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.499831 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8"] Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.543649 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.753024 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.754879 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.758713 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.759740 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.761931 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-cbs2f" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.765317 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.768390 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.768727 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.768740 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.768849 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.778825 4737 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.825889 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v7xfx" event={"ID":"6fde32c0-5dcb-4efe-af6f-599aef4e391e","Type":"ContainerStarted","Data":"3422d6daf896ebda0f5d71bb3f8b2abd174ebbec32c0a208d0b791e8bbcb941f"} Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.827316 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" event={"ID":"f99f3741-65ba-4744-ac57-6472aa4b19f3","Type":"ContainerStarted","Data":"fff39066d884fe2be4aeb20ad13fae73bd8d07bba934ea2ef7cc01a3371dc1f3"} Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.849993 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjj8n\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-kube-api-access-wjj8n\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850036 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-web-config\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850055 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " 
pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850088 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850110 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-config-volume\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850125 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850148 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-config-out\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850266 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850351 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850386 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850447 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.850467 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.855510 4737 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.898384 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-v57d5"] Jan 26 18:37:54 crc kubenswrapper[4737]: W0126 18:37:54.912796 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ba826c0_a9b8_4675_b157_ca8ff7730271.slice/crio-fc703b7a69e69d1cfdd51d9976711a064d72b1ab67539f7aee61106dba5c58ee WatchSource:0}: Error finding container fc703b7a69e69d1cfdd51d9976711a064d72b1ab67539f7aee61106dba5c58ee: Status 404 returned error can't find the container with id fc703b7a69e69d1cfdd51d9976711a064d72b1ab67539f7aee61106dba5c58ee Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.951602 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-web-config\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.952483 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.952681 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " 
pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.952859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-config-volume\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.953031 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.953183 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-config-out\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.953602 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.953763 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.953900 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.954010 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.954174 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.954309 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjj8n\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-kube-api-access-wjj8n\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.955544 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") 
" pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.956974 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.961361 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394af859-0214-46ab-8cd5-023c7f9a601c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.966504 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/394af859-0214-46ab-8cd5-023c7f9a601c-config-out\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.970937 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-config-volume\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.972920 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" 
Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.973652 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.976776 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.977254 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.978292 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:54 crc kubenswrapper[4737]: I0126 18:37:54.994469 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/394af859-0214-46ab-8cd5-023c7f9a601c-web-config\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:55 crc 
kubenswrapper[4737]: I0126 18:37:55.011814 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjj8n\" (UniqueName: \"kubernetes.io/projected/394af859-0214-46ab-8cd5-023c7f9a601c-kube-api-access-wjj8n\") pod \"alertmanager-main-0\" (UID: \"394af859-0214-46ab-8cd5-023c7f9a601c\") " pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.083759 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.577651 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.595984 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-fc8bc4478-pnz7r"] Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.599982 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.607649 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-fc8bc4478-pnz7r"] Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.608878 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609154 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609351 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609420 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609558 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2ek3p6m9lo3b8" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609716 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.609928 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-4hc4r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.768812 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " 
pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769089 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wl8\" (UniqueName: \"kubernetes.io/projected/f1458df1-0b67-453c-b067-4823882ec184-kube-api-access-v8wl8\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769181 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769267 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f1458df1-0b67-453c-b067-4823882ec184-metrics-client-ca\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769331 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-grpc-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769356 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769425 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.769474 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: W0126 18:37:55.769812 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod394af859_0214_46ab_8cd5_023c7f9a601c.slice/crio-654ba87156ee0ecceba3dda63652834baae94adbdbc3ce7064c9722e4a6405f3 WatchSource:0}: Error finding container 654ba87156ee0ecceba3dda63652834baae94adbdbc3ce7064c9722e4a6405f3: Status 404 returned error can't find the container with id 654ba87156ee0ecceba3dda63652834baae94adbdbc3ce7064c9722e4a6405f3 Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.849905 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"654ba87156ee0ecceba3dda63652834baae94adbdbc3ce7064c9722e4a6405f3"} Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.852448 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" event={"ID":"6ba826c0-a9b8-4675-b157-ca8ff7730271","Type":"ContainerStarted","Data":"5c1144cc130a6fec5a264c972177e46104adc7ba447f7bd57700a049668bd67b"} Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.852479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" event={"ID":"6ba826c0-a9b8-4675-b157-ca8ff7730271","Type":"ContainerStarted","Data":"866457e3dfa2c9e8f971d08ae67b5815c265a16a0b28b977873f09fa34ad7406"} Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.852489 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" event={"ID":"6ba826c0-a9b8-4675-b157-ca8ff7730271","Type":"ContainerStarted","Data":"fc703b7a69e69d1cfdd51d9976711a064d72b1ab67539f7aee61106dba5c58ee"} Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870759 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wl8\" (UniqueName: \"kubernetes.io/projected/f1458df1-0b67-453c-b067-4823882ec184-kube-api-access-v8wl8\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870842 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " 
pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870887 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f1458df1-0b67-453c-b067-4823882ec184-metrics-client-ca\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870920 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-grpc-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870951 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.870986 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.871022 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.871050 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.873580 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f1458df1-0b67-453c-b067-4823882ec184-metrics-client-ca\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.877715 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-grpc-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.877727 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: 
I0126 18:37:55.878999 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.886885 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-tls\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.891223 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wl8\" (UniqueName: \"kubernetes.io/projected/f1458df1-0b67-453c-b067-4823882ec184-kube-api-access-v8wl8\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.892464 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.896588 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f1458df1-0b67-453c-b067-4823882ec184-secret-thanos-querier-kube-rbac-proxy-metrics\") pod 
\"thanos-querier-fc8bc4478-pnz7r\" (UID: \"f1458df1-0b67-453c-b067-4823882ec184\") " pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:55 crc kubenswrapper[4737]: I0126 18:37:55.925554 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:37:56 crc kubenswrapper[4737]: I0126 18:37:56.669720 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-fc8bc4478-pnz7r"] Jan 26 18:37:56 crc kubenswrapper[4737]: W0126 18:37:56.691824 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1458df1_0b67_453c_b067_4823882ec184.slice/crio-5c5056fb1985ee1708d102cddb1b5b6cb347c1fb8632471d1ef4b999fdae4db5 WatchSource:0}: Error finding container 5c5056fb1985ee1708d102cddb1b5b6cb347c1fb8632471d1ef4b999fdae4db5: Status 404 returned error can't find the container with id 5c5056fb1985ee1708d102cddb1b5b6cb347c1fb8632471d1ef4b999fdae4db5 Jan 26 18:37:56 crc kubenswrapper[4737]: I0126 18:37:56.863129 4737 generic.go:334] "Generic (PLEG): container finished" podID="6fde32c0-5dcb-4efe-af6f-599aef4e391e" containerID="9d2d9a05cfa492c8f51935879e8b1353445d45014a1b5369ccefb4e0d298f828" exitCode=0 Jan 26 18:37:56 crc kubenswrapper[4737]: I0126 18:37:56.863235 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v7xfx" event={"ID":"6fde32c0-5dcb-4efe-af6f-599aef4e391e","Type":"ContainerDied","Data":"9d2d9a05cfa492c8f51935879e8b1353445d45014a1b5369ccefb4e0d298f828"} Jan 26 18:37:56 crc kubenswrapper[4737]: I0126 18:37:56.867877 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" event={"ID":"f99f3741-65ba-4744-ac57-6472aa4b19f3","Type":"ContainerStarted","Data":"31f358c5472b563eab7f41aec991b4c5eee15a3524ba2f3cb729a13e4ca71551"} Jan 26 18:37:56 crc kubenswrapper[4737]: 
I0126 18:37:56.867919 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" event={"ID":"f99f3741-65ba-4744-ac57-6472aa4b19f3","Type":"ContainerStarted","Data":"301b22b48acb2cb3e96648d13a05ccfad8c7324831a5106e794f09cf0c2d25d1"} Jan 26 18:37:56 crc kubenswrapper[4737]: I0126 18:37:56.870303 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"5c5056fb1985ee1708d102cddb1b5b6cb347c1fb8632471d1ef4b999fdae4db5"} Jan 26 18:37:57 crc kubenswrapper[4737]: I0126 18:37:57.876293 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v7xfx" event={"ID":"6fde32c0-5dcb-4efe-af6f-599aef4e391e","Type":"ContainerStarted","Data":"8c27545e06900d5d699f828c05b25f5644c2c85e138d9e993d6d697cb285624c"} Jan 26 18:37:57 crc kubenswrapper[4737]: I0126 18:37:57.876604 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v7xfx" event={"ID":"6fde32c0-5dcb-4efe-af6f-599aef4e391e","Type":"ContainerStarted","Data":"1b62305915d345a64c64c8437d262348afa70d244b48b08d2f79a38ba8c25e20"} Jan 26 18:37:57 crc kubenswrapper[4737]: I0126 18:37:57.877961 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" event={"ID":"f99f3741-65ba-4744-ac57-6472aa4b19f3","Type":"ContainerStarted","Data":"c01d31568b29f3fe6e1cbd5cbd588793ab112f2d4508497c0d70280dbdfe1104"} Jan 26 18:37:57 crc kubenswrapper[4737]: I0126 18:37:57.898437 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-v7xfx" podStartSLOduration=3.144269164 podStartE2EDuration="4.898420898s" podCreationTimestamp="2026-01-26 18:37:53 +0000 UTC" firstStartedPulling="2026-01-26 18:37:54.072409027 +0000 UTC m=+447.380603735" 
lastFinishedPulling="2026-01-26 18:37:55.826560761 +0000 UTC m=+449.134755469" observedRunningTime="2026-01-26 18:37:57.891807107 +0000 UTC m=+451.200001825" watchObservedRunningTime="2026-01-26 18:37:57.898420898 +0000 UTC m=+451.206615606" Jan 26 18:37:57 crc kubenswrapper[4737]: I0126 18:37:57.915440 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-rmww8" podStartSLOduration=2.928569661 podStartE2EDuration="4.915419178s" podCreationTimestamp="2026-01-26 18:37:53 +0000 UTC" firstStartedPulling="2026-01-26 18:37:54.506358759 +0000 UTC m=+447.814553467" lastFinishedPulling="2026-01-26 18:37:56.493208276 +0000 UTC m=+449.801402984" observedRunningTime="2026-01-26 18:37:57.910578553 +0000 UTC m=+451.218773261" watchObservedRunningTime="2026-01-26 18:37:57.915419178 +0000 UTC m=+451.223613886" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.527846 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"] Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.529158 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.538993 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"] Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.612504 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.612999 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.613023 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.613041 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.613126 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.613297 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4285b\" (UniqueName: \"kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.613348 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714345 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714413 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4285b\" (UniqueName: \"kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714440 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714472 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714504 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714529 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.714554 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.715639 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.716478 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.717121 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.717317 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.720022 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.723165 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.737922 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4285b\" (UniqueName: \"kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b\") pod \"console-7b675946d5-d6dz9\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") " pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.851051 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.903181 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"caf255c7869cdfaf85c36821be52818f1d2fa078a513e497d851a0d361c54496"} Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.916260 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" event={"ID":"6ba826c0-a9b8-4675-b157-ca8ff7730271","Type":"ContainerStarted","Data":"c5734897ae4b3b7f1a53fc690e9a183203942d24cab3b1f105208abcb2a21400"} Jan 26 18:37:58 crc kubenswrapper[4737]: I0126 18:37:58.970420 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-v57d5" podStartSLOduration=2.591162119 podStartE2EDuration="5.970398155s" podCreationTimestamp="2026-01-26 18:37:53 +0000 UTC" firstStartedPulling="2026-01-26 18:37:55.327153385 +0000 UTC m=+448.635348093" lastFinishedPulling="2026-01-26 18:37:58.706389421 +0000 UTC m=+452.014584129" 
observedRunningTime="2026-01-26 18:37:58.960660353 +0000 UTC m=+452.268855061" watchObservedRunningTime="2026-01-26 18:37:58.970398155 +0000 UTC m=+452.278592863" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.046251 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-765f95fb8-5vxfr"] Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.047412 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.050703 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-csxkx" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.051000 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.051319 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-78242abqsnne7" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.051492 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.051610 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.051713 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.052562 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-765f95fb8-5vxfr"] Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124062 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124152 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a68d9df4-98bd-4115-ad88-23472a9902e9-audit-log\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124249 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-metrics-server-audit-profiles\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124627 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-server-tls\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124934 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-client-certs\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " 
pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.124980 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-client-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.125010 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqxj\" (UniqueName: \"kubernetes.io/projected/a68d9df4-98bd-4115-ad88-23472a9902e9-kube-api-access-5tqxj\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.226830 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-server-tls\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.226963 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-client-certs\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.226996 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-client-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.227032 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqxj\" (UniqueName: \"kubernetes.io/projected/a68d9df4-98bd-4115-ad88-23472a9902e9-kube-api-access-5tqxj\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.227060 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.227111 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a68d9df4-98bd-4115-ad88-23472a9902e9-audit-log\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.227150 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-metrics-server-audit-profiles\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.228207 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a68d9df4-98bd-4115-ad88-23472a9902e9-audit-log\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.228292 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.228607 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a68d9df4-98bd-4115-ad88-23472a9902e9-metrics-server-audit-profiles\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.237873 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-server-tls\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.238008 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-client-ca-bundle\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 
26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.238090 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a68d9df4-98bd-4115-ad88-23472a9902e9-secret-metrics-client-certs\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.245042 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqxj\" (UniqueName: \"kubernetes.io/projected/a68d9df4-98bd-4115-ad88-23472a9902e9-kube-api-access-5tqxj\") pod \"metrics-server-765f95fb8-5vxfr\" (UID: \"a68d9df4-98bd-4115-ad88-23472a9902e9\") " pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.345850 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"] Jan 26 18:37:59 crc kubenswrapper[4737]: W0126 18:37:59.355331 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a4a5a1b_cf9e_4c2d_8f1f_528e57fe80fa.slice/crio-08d8e04e96112709e9f57aca68a49e89555efb4759b1dabb4d6db3abebc33fe0 WatchSource:0}: Error finding container 08d8e04e96112709e9f57aca68a49e89555efb4759b1dabb4d6db3abebc33fe0: Status 404 returned error can't find the container with id 08d8e04e96112709e9f57aca68a49e89555efb4759b1dabb4d6db3abebc33fe0 Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.367201 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.381381 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49"] Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.382484 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.385843 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.386036 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 26 18:37:59 crc kubenswrapper[4737]: I0126 18:37:59.394305 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49"] Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.531374 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b43b676e-7349-4516-bec6-8a58869e230e-monitoring-plugin-cert\") pod \"monitoring-plugin-57f7668bd6-kvv49\" (UID: \"b43b676e-7349-4516-bec6-8a58869e230e\") " pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.633887 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b43b676e-7349-4516-bec6-8a58869e230e-monitoring-plugin-cert\") pod \"monitoring-plugin-57f7668bd6-kvv49\" (UID: \"b43b676e-7349-4516-bec6-8a58869e230e\") " pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.642555 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b43b676e-7349-4516-bec6-8a58869e230e-monitoring-plugin-cert\") pod \"monitoring-plugin-57f7668bd6-kvv49\" (UID: \"b43b676e-7349-4516-bec6-8a58869e230e\") " pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.706016 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.833311 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-765f95fb8-5vxfr"] Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.922745 4737 generic.go:334] "Generic (PLEG): container finished" podID="394af859-0214-46ab-8cd5-023c7f9a601c" containerID="caf255c7869cdfaf85c36821be52818f1d2fa078a513e497d851a0d361c54496" exitCode=0 Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.922804 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerDied","Data":"caf255c7869cdfaf85c36821be52818f1d2fa078a513e497d851a0d361c54496"} Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.929916 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" event={"ID":"a68d9df4-98bd-4115-ad88-23472a9902e9","Type":"ContainerStarted","Data":"b2dbfa37ae6e6a9bd63d952e35970c63ce0f77dab9b1915964c2585b189f3669"} Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.933853 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b675946d5-d6dz9" event={"ID":"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa","Type":"ContainerStarted","Data":"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"} Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.933905 4737 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/console-7b675946d5-d6dz9" event={"ID":"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa","Type":"ContainerStarted","Data":"08d8e04e96112709e9f57aca68a49e89555efb4759b1dabb4d6db3abebc33fe0"} Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.986213 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.988166 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.992305 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.992587 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.993475 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.994096 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-4hmfqlt18micu" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.994245 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.994362 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.994627 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.995103 4737 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-dockercfg-njsr5" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:37:59.996416 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.000674 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.000738 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.007081 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.007586 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.017348 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.017832 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b675946d5-d6dz9" podStartSLOduration=2.017803356 podStartE2EDuration="2.017803356s" podCreationTimestamp="2026-01-26 18:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:38:00.00059824 +0000 UTC m=+453.308792958" watchObservedRunningTime="2026-01-26 18:38:00.017803356 +0000 UTC m=+453.325998064" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144543 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144599 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144730 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-config\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144804 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144835 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144868 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144926 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.144980 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.145007 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.145052 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.145204 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.145300 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-config-out\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.145976 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.146018 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.146140 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 
18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.146590 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64ft\" (UniqueName: \"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-kube-api-access-x64ft\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.146674 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.146741 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248119 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-config-out\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248166 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248186 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248221 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248239 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x64ft\" (UniqueName: \"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-kube-api-access-x64ft\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248254 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248279 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 
26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248305 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-web-config\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248321 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248343 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-config\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248363 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248378 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248400 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248419 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248455 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.248682 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc 
kubenswrapper[4737]: I0126 18:38:00.248753 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.249284 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.249329 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.249769 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.250038 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc 
kubenswrapper[4737]: I0126 18:38:00.250105 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.253594 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.253936 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.255011 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-web-config\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.255710 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-config\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.255848 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.255899 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.257383 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.258340 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.260428 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.265687 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x64ft\" (UniqueName: 
\"kubernetes.io/projected/3073e905-a285-4006-9d3b-f301c8a28733-kube-api-access-x64ft\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.266932 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3073e905-a285-4006-9d3b-f301c8a28733-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.267928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3073e905-a285-4006-9d3b-f301c8a28733-config-out\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.280868 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3073e905-a285-4006-9d3b-f301c8a28733-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3073e905-a285-4006-9d3b-f301c8a28733\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.310984 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:00 crc kubenswrapper[4737]: I0126 18:38:00.804614 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49"] Jan 26 18:38:01 crc kubenswrapper[4737]: W0126 18:38:01.133879 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43b676e_7349_4516_bec6_8a58869e230e.slice/crio-81795774863b5d352ca6c9a5cba1beb055f7e8e1d6dd0921c93671f1397266ed WatchSource:0}: Error finding container 81795774863b5d352ca6c9a5cba1beb055f7e8e1d6dd0921c93671f1397266ed: Status 404 returned error can't find the container with id 81795774863b5d352ca6c9a5cba1beb055f7e8e1d6dd0921c93671f1397266ed Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.578163 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 26 18:38:01 crc kubenswrapper[4737]: W0126 18:38:01.589171 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3073e905_a285_4006_9d3b_f301c8a28733.slice/crio-d820202dba20cfb3878709186aa691ade27faada4a382b6ccc8515b2dfe4ebae WatchSource:0}: Error finding container d820202dba20cfb3878709186aa691ade27faada4a382b6ccc8515b2dfe4ebae: Status 404 returned error can't find the container with id d820202dba20cfb3878709186aa691ade27faada4a382b6ccc8515b2dfe4ebae Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.949659 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"afce3b5b2d2a9790dc07ae28c1f37a84438ed927c39d1b4106de9c6d31a48878"} Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.949708 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" 
event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"e963f062c5712147853caac9d3c154dcd0a09938327c605fde93dc751d978cb8"} Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.949722 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"053c2dcf15b9b7ad8783d06758b397ecc4c25f2d7e1dafe34f729849206abfd6"} Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.951123 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" event={"ID":"b43b676e-7349-4516-bec6-8a58869e230e","Type":"ContainerStarted","Data":"81795774863b5d352ca6c9a5cba1beb055f7e8e1d6dd0921c93671f1397266ed"} Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.954201 4737 generic.go:334] "Generic (PLEG): container finished" podID="3073e905-a285-4006-9d3b-f301c8a28733" containerID="a6efaecd95f8e0243e6cda46dccc2c37fc5e75fbaba5349d150040ed0d7b7fd4" exitCode=0 Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.954245 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerDied","Data":"a6efaecd95f8e0243e6cda46dccc2c37fc5e75fbaba5349d150040ed0d7b7fd4"} Jan 26 18:38:01 crc kubenswrapper[4737]: I0126 18:38:01.954272 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"d820202dba20cfb3878709186aa691ade27faada4a382b6ccc8515b2dfe4ebae"} Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.970574 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" 
event={"ID":"a68d9df4-98bd-4115-ad88-23472a9902e9","Type":"ContainerStarted","Data":"213f7dc7e813aeace97fdb30e3d5384085ac52c66a60a5dc47aba15cf0f0c91d"} Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.974365 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" event={"ID":"b43b676e-7349-4516-bec6-8a58869e230e","Type":"ContainerStarted","Data":"bfd5218d7f73ce8f1365b7f86aa6fb4936800e83df7aff74c68fe38a14209a8f"} Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.974738 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.985618 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.995200 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"a800a8e8396852b3ae63cf75f92b871e26fcbf77267c864c5a86570b36aebb2f"} Jan 26 18:38:03 crc kubenswrapper[4737]: I0126 18:38:03.995661 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" podStartSLOduration=1.250076401 podStartE2EDuration="4.995634474s" podCreationTimestamp="2026-01-26 18:37:59 +0000 UTC" firstStartedPulling="2026-01-26 18:37:59.85496316 +0000 UTC m=+453.163157868" lastFinishedPulling="2026-01-26 18:38:03.600521233 +0000 UTC m=+456.908715941" observedRunningTime="2026-01-26 18:38:03.988634872 +0000 UTC m=+457.296829610" watchObservedRunningTime="2026-01-26 18:38:03.995634474 +0000 UTC m=+457.303829212" Jan 26 18:38:04 crc kubenswrapper[4737]: I0126 18:38:04.007720 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/monitoring-plugin-57f7668bd6-kvv49" podStartSLOduration=2.536623856 podStartE2EDuration="5.007695451s" podCreationTimestamp="2026-01-26 18:37:59 +0000 UTC" firstStartedPulling="2026-01-26 18:38:01.137480116 +0000 UTC m=+454.445674824" lastFinishedPulling="2026-01-26 18:38:03.608551711 +0000 UTC m=+456.916746419" observedRunningTime="2026-01-26 18:38:04.006115142 +0000 UTC m=+457.314309880" watchObservedRunningTime="2026-01-26 18:38:04.007695451 +0000 UTC m=+457.315890169" Jan 26 18:38:05 crc kubenswrapper[4737]: I0126 18:38:05.005311 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"46b6dc576ed706e0bcd4adc1c5b7c9876fca4edb512996470c89b29bc265c8fa"} Jan 26 18:38:05 crc kubenswrapper[4737]: I0126 18:38:05.005727 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"19a88a6c046aa5fabe4682d44c21ae20bd026c585d125f90e3f24d5f82e5be65"} Jan 26 18:38:05 crc kubenswrapper[4737]: I0126 18:38:05.008400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"d876cec30b424cd64ab6222349796beb91b26c2c59d8b419d73d8ca813462763"} Jan 26 18:38:05 crc kubenswrapper[4737]: I0126 18:38:05.008429 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"20b5ef9087bb08ca821ca973964b42cd4b8bab3999435e8a3270cc8a92cd5d87"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.028025 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" 
event={"ID":"f1458df1-0b67-453c-b067-4823882ec184","Type":"ContainerStarted","Data":"2987c04113531e4651d369223d54fd0b4d0763caaaeb5f53d09c0e5bddb1a654"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.028522 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.034966 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"1c03cb621bbad45daa9f422a3ade62b1bd1b474c1a084a3d30c17b8c92497cd8"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.035035 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"d0b0ae5f452c00ec9078e5caabea010a1303ebec19992b0b06653df2fb8cdda0"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.035051 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"83f67b8dceda53f34c262687a546bbc092b5e08fcac04d146126ce80290870a9"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.035066 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"cae4f52fa064d29f4b149ca570b4be28941860ab122d53c935fce9f240d15d78"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.035112 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"f0618aab49d54ff7a2215844b50b15b7a4e4a946c07888e114dfbde872603d0a"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.035129 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3073e905-a285-4006-9d3b-f301c8a28733","Type":"ContainerStarted","Data":"3d5315552c5284a0a253ab5eafc3972e3e9f40e3337c74ce69d77987aab78453"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.039977 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"e9112425ac12164df8d06bf7b95582e78840fbdfc9dfc92a7b5a0b47cb861b04"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.040011 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"33b4b6625bfeeb1d6580206e1f937bc4cb619035030df4d125fb579e26931ae0"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.040027 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"394af859-0214-46ab-8cd5-023c7f9a601c","Type":"ContainerStarted","Data":"39041b9123865a6bf6723e74612968092a472cd2ebf04c3753780aa0c591e454"} Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.044496 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.086187 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" podStartSLOduration=4.699337709 podStartE2EDuration="12.086146299s" podCreationTimestamp="2026-01-26 18:37:55 +0000 UTC" firstStartedPulling="2026-01-26 18:37:56.694452335 +0000 UTC m=+450.002647043" lastFinishedPulling="2026-01-26 18:38:04.081260925 +0000 UTC m=+457.389455633" observedRunningTime="2026-01-26 18:38:07.073747093 +0000 UTC m=+460.381941831" watchObservedRunningTime="2026-01-26 18:38:07.086146299 +0000 UTC 
m=+460.394341027" Jan 26 18:38:07 crc kubenswrapper[4737]: I0126 18:38:07.150290 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.00480629 podStartE2EDuration="8.150260689s" podCreationTimestamp="2026-01-26 18:37:59 +0000 UTC" firstStartedPulling="2026-01-26 18:38:01.95589015 +0000 UTC m=+455.264084868" lastFinishedPulling="2026-01-26 18:38:06.101344559 +0000 UTC m=+459.409539267" observedRunningTime="2026-01-26 18:38:07.145213085 +0000 UTC m=+460.453407793" watchObservedRunningTime="2026-01-26 18:38:07.150260689 +0000 UTC m=+460.458455407" Jan 26 18:38:08 crc kubenswrapper[4737]: I0126 18:38:08.852373 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:38:08 crc kubenswrapper[4737]: I0126 18:38:08.853632 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:38:08 crc kubenswrapper[4737]: I0126 18:38:08.858601 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:38:08 crc kubenswrapper[4737]: I0126 18:38:08.882978 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=7.056089756 podStartE2EDuration="14.882954927s" podCreationTimestamp="2026-01-26 18:37:54 +0000 UTC" firstStartedPulling="2026-01-26 18:37:55.77437357 +0000 UTC m=+449.082568278" lastFinishedPulling="2026-01-26 18:38:03.601238741 +0000 UTC m=+456.909433449" observedRunningTime="2026-01-26 18:38:07.178431754 +0000 UTC m=+460.486626472" watchObservedRunningTime="2026-01-26 18:38:08.882954927 +0000 UTC m=+462.191149635" Jan 26 18:38:09 crc kubenswrapper[4737]: I0126 18:38:09.060306 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-7b675946d5-d6dz9" Jan 26 18:38:09 crc kubenswrapper[4737]: I0126 18:38:09.123878 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hbdm4"] Jan 26 18:38:10 crc kubenswrapper[4737]: I0126 18:38:10.311719 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 26 18:38:19 crc kubenswrapper[4737]: I0126 18:38:19.368264 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:38:19 crc kubenswrapper[4737]: I0126 18:38:19.368562 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr" Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.166756 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hbdm4" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console" containerID="cri-o://7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c" gracePeriod=15 Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.506958 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hbdm4_255d9d52-daaf-41e1-be00-4a94de0a6324/console/0.log" Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.507278 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hbdm4" Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.530462 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.536298 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.631883 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632357 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6dj\" (UniqueName: \"kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632436 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc 
kubenswrapper[4737]: I0126 18:38:34.632479 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632503 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632528 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config\") pod \"255d9d52-daaf-41e1-be00-4a94de0a6324\" (UID: \"255d9d52-daaf-41e1-be00-4a94de0a6324\") " Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632715 4737 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632948 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.632963 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config" (OuterVolumeSpecName: "console-config") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.633644 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.633657 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca" (OuterVolumeSpecName: "service-ca") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.635900 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.635905 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj" (OuterVolumeSpecName: "kube-api-access-zf6dj") pod "255d9d52-daaf-41e1-be00-4a94de0a6324" (UID: "255d9d52-daaf-41e1-be00-4a94de0a6324"). InnerVolumeSpecName "kube-api-access-zf6dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.733698 4737 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.734023 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf6dj\" (UniqueName: \"kubernetes.io/projected/255d9d52-daaf-41e1-be00-4a94de0a6324-kube-api-access-zf6dj\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.734106 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.734173 4737 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.734223 4737 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/255d9d52-daaf-41e1-be00-4a94de0a6324-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:34 crc kubenswrapper[4737]: I0126 18:38:34.734269 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/255d9d52-daaf-41e1-be00-4a94de0a6324-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218215 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hbdm4_255d9d52-daaf-41e1-be00-4a94de0a6324/console/0.log"
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218277 4737 generic.go:334] "Generic (PLEG): container finished" podID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerID="7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c" exitCode=2
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218311 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbdm4" event={"ID":"255d9d52-daaf-41e1-be00-4a94de0a6324","Type":"ContainerDied","Data":"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"}
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218340 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hbdm4" event={"ID":"255d9d52-daaf-41e1-be00-4a94de0a6324","Type":"ContainerDied","Data":"cc3bb592bcc22180a1d958bf5bdaaf966a903ba616b9b7c7dcf4a2f47bfa9027"}
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218353 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hbdm4"
Jan 26 18:38:35 crc kubenswrapper[4737]: I0126 18:38:35.218360 4737 scope.go:117] "RemoveContainer" containerID="7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"
Jan 26 18:38:36 crc kubenswrapper[4737]: I0126 18:38:36.645640 4737 scope.go:117] "RemoveContainer" containerID="7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"
Jan 26 18:38:36 crc kubenswrapper[4737]: E0126 18:38:36.646144 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c\": container with ID starting with 7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c not found: ID does not exist" containerID="7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"
Jan 26 18:38:36 crc kubenswrapper[4737]: I0126 18:38:36.646168 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c"} err="failed to get container status \"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c\": rpc error: code = NotFound desc = could not find container \"7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c\": container with ID starting with 7ad1c983cd49e50a7eb1f5d187e10c3a08328d94624a10767a7aa06eea0c137c not found: ID does not exist"
Jan 26 18:38:36 crc kubenswrapper[4737]: I0126 18:38:36.672344 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hbdm4"]
Jan 26 18:38:36 crc kubenswrapper[4737]: I0126 18:38:36.674640 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hbdm4"]
Jan 26 18:38:36 crc kubenswrapper[4737]: I0126 18:38:36.992532 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" path="/var/lib/kubelet/pods/255d9d52-daaf-41e1-be00-4a94de0a6324/volumes"
Jan 26 18:38:39 crc kubenswrapper[4737]: I0126 18:38:39.374248 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr"
Jan 26 18:38:39 crc kubenswrapper[4737]: I0126 18:38:39.381341 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-765f95fb8-5vxfr"
Jan 26 18:39:00 crc kubenswrapper[4737]: I0126 18:39:00.312325 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 18:39:00 crc kubenswrapper[4737]: I0126 18:39:00.339095 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 18:39:00 crc kubenswrapper[4737]: I0126 18:39:00.518100 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.519043 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"]
Jan 26 18:39:37 crc kubenswrapper[4737]: E0126 18:39:37.520028 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.520045 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.520219 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="255d9d52-daaf-41e1-be00-4a94de0a6324" containerName="console"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.520678 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.544836 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"]
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561592 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561702 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561726 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdq8\" (UniqueName: \"kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561763 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561781 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561806 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.561828 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663216 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663268 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663304 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663330 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663352 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663432 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.663457 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgdq8\" (UniqueName: \"kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.665132 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.665468 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.665469 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.665564 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.670953 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.671054 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.680679 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgdq8\" (UniqueName: \"kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8\") pod \"console-64f7cd9bf9-xgwrd\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:37 crc kubenswrapper[4737]: I0126 18:39:37.843956 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:38 crc kubenswrapper[4737]: I0126 18:39:38.071803 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"]
Jan 26 18:39:38 crc kubenswrapper[4737]: I0126 18:39:38.766593 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f7cd9bf9-xgwrd" event={"ID":"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2","Type":"ContainerStarted","Data":"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda"}
Jan 26 18:39:38 crc kubenswrapper[4737]: I0126 18:39:38.766665 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f7cd9bf9-xgwrd" event={"ID":"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2","Type":"ContainerStarted","Data":"e1f4f889bf489708f34282013ab292842dafc3316391d97bfa100fa2263d2a01"}
Jan 26 18:39:47 crc kubenswrapper[4737]: I0126 18:39:47.844427 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:47 crc kubenswrapper[4737]: I0126 18:39:47.845054 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:47 crc kubenswrapper[4737]: I0126 18:39:47.849158 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:47 crc kubenswrapper[4737]: I0126 18:39:47.870781 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64f7cd9bf9-xgwrd" podStartSLOduration=10.870763972 podStartE2EDuration="10.870763972s" podCreationTimestamp="2026-01-26 18:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:39:38.800981242 +0000 UTC m=+552.109175940" watchObservedRunningTime="2026-01-26 18:39:47.870763972 +0000 UTC m=+561.178958680"
Jan 26 18:39:48 crc kubenswrapper[4737]: I0126 18:39:48.839611 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64f7cd9bf9-xgwrd"
Jan 26 18:39:48 crc kubenswrapper[4737]: I0126 18:39:48.896418 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"]
Jan 26 18:40:00 crc kubenswrapper[4737]: I0126 18:40:00.948958 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:40:00 crc kubenswrapper[4737]: I0126 18:40:00.949647 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:40:13 crc kubenswrapper[4737]: I0126 18:40:13.948042 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b675946d5-d6dz9" podUID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" containerName="console" containerID="cri-o://50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3" gracePeriod=15
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.323491 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b675946d5-d6dz9_2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa/console/0.log"
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.323899 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b675946d5-d6dz9"
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412029 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4285b\" (UniqueName: \"kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412141 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412181 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412253 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412293 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412329 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.412447 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert\") pod \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\" (UID: \"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa\") "
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.413536 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca" (OuterVolumeSpecName: "service-ca") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.413559 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.413529 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config" (OuterVolumeSpecName: "console-config") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.413797 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.418760 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.419165 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.420242 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b" (OuterVolumeSpecName: "kube-api-access-4285b") pod "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" (UID: "2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa"). InnerVolumeSpecName "kube-api-access-4285b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515421 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-service-ca\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515519 4737 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515562 4737 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515583 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515610 4737 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515628 4737 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:14 crc kubenswrapper[4737]: I0126 18:40:14.515646 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4285b\" (UniqueName: \"kubernetes.io/projected/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa-kube-api-access-4285b\") on node \"crc\" DevicePath \"\""
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.002845 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b675946d5-d6dz9_2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa/console/0.log"
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.002902 4737 generic.go:334] "Generic (PLEG): container finished" podID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" containerID="50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3" exitCode=2
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.002935 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b675946d5-d6dz9" event={"ID":"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa","Type":"ContainerDied","Data":"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"}
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.002964 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b675946d5-d6dz9" event={"ID":"2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa","Type":"ContainerDied","Data":"08d8e04e96112709e9f57aca68a49e89555efb4759b1dabb4d6db3abebc33fe0"}
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.002982 4737 scope.go:117] "RemoveContainer" containerID="50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.003116 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b675946d5-d6dz9"
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.034087 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"]
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.043115 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b675946d5-d6dz9"]
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.050360 4737 scope.go:117] "RemoveContainer" containerID="50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"
Jan 26 18:40:15 crc kubenswrapper[4737]: E0126 18:40:15.051044 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3\": container with ID starting with 50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3 not found: ID does not exist" containerID="50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"
Jan 26 18:40:15 crc kubenswrapper[4737]: I0126 18:40:15.051092 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3"} err="failed to get container status \"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3\": rpc error: code = NotFound desc = could not find container \"50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3\": container with ID starting with 50d84adbe7f5c323b80c92bea6e74486dea96c69fc5c7cafd70fd3cda81a03d3 not found: ID does not exist"
Jan 26 18:40:16 crc kubenswrapper[4737]: I0126 18:40:16.995903 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" path="/var/lib/kubelet/pods/2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa/volumes"
Jan 26 18:40:30 crc kubenswrapper[4737]: I0126 18:40:30.949042 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:40:30 crc kubenswrapper[4737]: I0126 18:40:30.949625 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:41:00 crc kubenswrapper[4737]: I0126 18:41:00.949505 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:41:00 crc kubenswrapper[4737]: I0126 18:41:00.950163 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:41:00 crc kubenswrapper[4737]: I0126 18:41:00.950211 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5"
Jan 26 18:41:00 crc kubenswrapper[4737]: I0126 18:41:00.950835 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 18:41:00 crc kubenswrapper[4737]: I0126 18:41:00.950888 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25" gracePeriod=600
Jan 26 18:41:01 crc kubenswrapper[4737]: I0126 18:41:01.312994 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25" exitCode=0
Jan 26 18:41:01 crc kubenswrapper[4737]: I0126 18:41:01.313061 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25"}
Jan 26 18:41:01 crc kubenswrapper[4737]: I0126 18:41:01.313427 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf"}
Jan 26 18:41:01 crc kubenswrapper[4737]: I0126 18:41:01.313449 4737 scope.go:117] "RemoveContainer" containerID="8783fe741322f0ba5562aa3c7abb35f1d6a9263f4a157b075924b1c99832d130"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.748505 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"]
Jan 26 18:42:42 crc kubenswrapper[4737]: E0126 18:42:42.749496 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" containerName="console"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.749513 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" containerName="console"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.749646 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a5a1b-cf9e-4c2d-8f1f-528e57fe80fa" containerName="console"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.750622 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.753369 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.760284 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"]
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.809210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tkw\" (UniqueName: \"kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.809502 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"
Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.809600 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.911061 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.911392 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tkw\" (UniqueName: \"kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.911491 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.911513 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.912288 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:42 crc kubenswrapper[4737]: I0126 18:42:42.939137 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tkw\" (UniqueName: \"kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:43 crc kubenswrapper[4737]: I0126 18:42:43.070467 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:43 crc kubenswrapper[4737]: I0126 18:42:43.495195 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x"] Jan 26 18:42:44 crc kubenswrapper[4737]: I0126 18:42:44.066618 4737 generic.go:334] "Generic (PLEG): container finished" podID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerID="5ef9ad722a4bb8c5799993f3c46c17a2674d222cdcbb3a703033a932b63770c5" exitCode=0 Jan 26 18:42:44 crc kubenswrapper[4737]: I0126 18:42:44.066656 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" event={"ID":"c801ad0c-6ec9-4497-ba0d-bad429d70783","Type":"ContainerDied","Data":"5ef9ad722a4bb8c5799993f3c46c17a2674d222cdcbb3a703033a932b63770c5"} Jan 26 18:42:44 crc kubenswrapper[4737]: I0126 18:42:44.067008 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" event={"ID":"c801ad0c-6ec9-4497-ba0d-bad429d70783","Type":"ContainerStarted","Data":"f219132913d6d8be38241ea7eff1b946f5ba1ce9a57bb71a8e2a147a7e22bafb"} Jan 26 18:42:46 crc kubenswrapper[4737]: I0126 18:42:46.082020 4737 generic.go:334] "Generic (PLEG): container finished" podID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerID="31f2b016eda5fb5575b882548fbe2e4fc0c1841444364f285d2a3b73903eeffc" exitCode=0 Jan 26 18:42:46 crc kubenswrapper[4737]: I0126 18:42:46.082053 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" event={"ID":"c801ad0c-6ec9-4497-ba0d-bad429d70783","Type":"ContainerDied","Data":"31f2b016eda5fb5575b882548fbe2e4fc0c1841444364f285d2a3b73903eeffc"} Jan 26 18:42:47 crc kubenswrapper[4737]: I0126 18:42:47.091938 4737 
generic.go:334] "Generic (PLEG): container finished" podID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerID="ae03a377e0a54edd60d6258fdd05cd987ad5a0b36e49644c0ff9dd270fef4a4c" exitCode=0 Jan 26 18:42:47 crc kubenswrapper[4737]: I0126 18:42:47.091982 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" event={"ID":"c801ad0c-6ec9-4497-ba0d-bad429d70783","Type":"ContainerDied","Data":"ae03a377e0a54edd60d6258fdd05cd987ad5a0b36e49644c0ff9dd270fef4a4c"} Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.372952 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.431488 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util\") pod \"c801ad0c-6ec9-4497-ba0d-bad429d70783\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.431557 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2tkw\" (UniqueName: \"kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw\") pod \"c801ad0c-6ec9-4497-ba0d-bad429d70783\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.432784 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle\") pod \"c801ad0c-6ec9-4497-ba0d-bad429d70783\" (UID: \"c801ad0c-6ec9-4497-ba0d-bad429d70783\") " Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.435921 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle" (OuterVolumeSpecName: "bundle") pod "c801ad0c-6ec9-4497-ba0d-bad429d70783" (UID: "c801ad0c-6ec9-4497-ba0d-bad429d70783"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.437932 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw" (OuterVolumeSpecName: "kube-api-access-p2tkw") pod "c801ad0c-6ec9-4497-ba0d-bad429d70783" (UID: "c801ad0c-6ec9-4497-ba0d-bad429d70783"). InnerVolumeSpecName "kube-api-access-p2tkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.445099 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util" (OuterVolumeSpecName: "util") pod "c801ad0c-6ec9-4497-ba0d-bad429d70783" (UID: "c801ad0c-6ec9-4497-ba0d-bad429d70783"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.534641 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.534685 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c801ad0c-6ec9-4497-ba0d-bad429d70783-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:48 crc kubenswrapper[4737]: I0126 18:42:48.534699 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tkw\" (UniqueName: \"kubernetes.io/projected/c801ad0c-6ec9-4497-ba0d-bad429d70783-kube-api-access-p2tkw\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:49 crc kubenswrapper[4737]: I0126 18:42:49.105757 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" event={"ID":"c801ad0c-6ec9-4497-ba0d-bad429d70783","Type":"ContainerDied","Data":"f219132913d6d8be38241ea7eff1b946f5ba1ce9a57bb71a8e2a147a7e22bafb"} Jan 26 18:42:49 crc kubenswrapper[4737]: I0126 18:42:49.106439 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f219132913d6d8be38241ea7eff1b946f5ba1ce9a57bb71a8e2a147a7e22bafb" Jan 26 18:42:49 crc kubenswrapper[4737]: I0126 18:42:49.105784 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x" Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.626663 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jgjrk"] Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.627544 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-controller" containerID="cri-o://067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.627988 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="sbdb" containerID="cri-o://570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.628033 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="nbdb" containerID="cri-o://0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.628116 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="northd" containerID="cri-o://8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.628163 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.628199 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kube-rbac-proxy-node" containerID="cri-o://66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.628236 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-acl-logging" containerID="cri-o://ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" gracePeriod=30 Jan 26 18:42:53 crc kubenswrapper[4737]: I0126 18:42:53.665327 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" containerID="cri-o://8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" gracePeriod=30 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.141225 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovnkube-controller/3.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.143589 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-acl-logging/0.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144142 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-controller/0.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 
18:42:54.144717 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" exitCode=0 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144753 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" exitCode=0 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144763 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" exitCode=0 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144771 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" exitCode=0 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144783 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" exitCode=143 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144792 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" exitCode=143 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144797 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144864 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" 
event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144886 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144898 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144911 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144923 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.144949 4737 scope.go:117] "RemoveContainer" containerID="6410407283f04a3f2e54ce997c8b1d77068c25df4c498c1cd5a23c30dbd514d4" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.146753 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/2.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.147599 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/1.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.147649 4737 generic.go:334] "Generic (PLEG): container finished" podID="82627aad-2019-482e-934a-7e9729927a34" containerID="00b3a8ab493480704ad64a0ee4fdc318b56fbd72df74360380e03d02e458cb9a" exitCode=2 Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.147690 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerDied","Data":"00b3a8ab493480704ad64a0ee4fdc318b56fbd72df74360380e03d02e458cb9a"} Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.148570 4737 scope.go:117] "RemoveContainer" containerID="00b3a8ab493480704ad64a0ee4fdc318b56fbd72df74360380e03d02e458cb9a" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.209298 4737 scope.go:117] "RemoveContainer" containerID="debc5589aae465210c77fde58754f822ad1d429fc00cfb56625deddf51cf6fc2" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.886581 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-acl-logging/0.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.888480 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-controller/0.log" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.889017 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929227 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929487 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929598 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929686 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929627 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929781 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929650 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929761 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929828 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929866 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929925 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929952 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.929944 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930011 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930012 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930043 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930044 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket" (OuterVolumeSpecName: "log-socket") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930057 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930082 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930136 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930171 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930205 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930226 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930219 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash" (OuterVolumeSpecName: "host-slash") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930245 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930258 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930277 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930310 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnp4x\" (UniqueName: \"kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930325 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930277 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930340 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log\") pod \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\" (UID: \"ecb40773-20dc-48ef-bf7f-17f4a042b01c\") " Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930359 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log" (OuterVolumeSpecName: "node-log") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930651 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930758 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930815 4737 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930910 4737 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930856 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.930969 4737 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931021 4737 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931035 4737 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931047 4737 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931058 4737 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931079 4737 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931090 4737 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931101 4737 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931113 4737 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931125 4737 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931135 4737 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931143 4737 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.931390 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.949405 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.953339 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x" (OuterVolumeSpecName: "kube-api-access-cnp4x") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "kube-api-access-cnp4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:54 crc kubenswrapper[4737]: I0126 18:42:54.974349 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ecb40773-20dc-48ef-bf7f-17f4a042b01c" (UID: "ecb40773-20dc-48ef-bf7f-17f4a042b01c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034002 4737 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034032 4737 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecb40773-20dc-48ef-bf7f-17f4a042b01c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034045 4737 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ecb40773-20dc-48ef-bf7f-17f4a042b01c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034054 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnp4x\" (UniqueName: \"kubernetes.io/projected/ecb40773-20dc-48ef-bf7f-17f4a042b01c-kube-api-access-cnp4x\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034063 4737 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.034090 4737 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ecb40773-20dc-48ef-bf7f-17f4a042b01c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068150 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b5645"] Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068412 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="util" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068432 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="util" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068440 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068449 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068460 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-acl-logging" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068466 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-acl-logging" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068474 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="nbdb" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068480 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="nbdb" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068488 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="northd" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068493 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="northd" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068504 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" 
containerName="kube-rbac-proxy-node" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068510 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kube-rbac-proxy-node" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068522 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="extract" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068528 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="extract" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068538 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068543 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068552 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="sbdb" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068557 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="sbdb" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068566 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068572 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068579 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kubecfg-setup" Jan 26 
18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068585 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kubecfg-setup" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068593 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="pull" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068598 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="pull" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068605 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068611 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068619 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068626 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068634 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068640 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068738 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="nbdb" Jan 26 18:42:55 crc 
kubenswrapper[4737]: I0126 18:42:55.068747 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068754 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-acl-logging" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068763 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068768 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068778 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068785 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovn-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068793 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="northd" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068799 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068808 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068817 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" 
containerName="kube-rbac-proxy-node" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068826 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="sbdb" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068833 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c801ad0c-6ec9-4497-ba0d-bad429d70783" containerName="extract" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.068933 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.068939 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerName="ovnkube-controller" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.079865 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.104445 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb40773_20dc_48ef_bf7f_17f4a042b01c.slice/crio-56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6\": RecentStats: unable to find data in memory cache]" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135282 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-env-overrides\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135340 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135371 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-etc-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135389 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135404 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-systemd-units\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135542 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-bin\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135614 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-netns\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135634 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135652 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13aef528-d160-451f-97db-46c7c0be2665-ovn-node-metrics-cert\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135672 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-slash\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135706 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-node-log\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135789 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-config\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135837 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-log-socket\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.135939 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsskp\" (UniqueName: \"kubernetes.io/projected/13aef528-d160-451f-97db-46c7c0be2665-kube-api-access-xsskp\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.136006 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-kubelet\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.136056 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-systemd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 
18:42:55.136099 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-script-lib\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.136122 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-netd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.136144 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-var-lib-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.136166 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-ovn\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157136 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-acl-logging/0.log" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157546 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jgjrk_ecb40773-20dc-48ef-bf7f-17f4a042b01c/ovn-controller/0.log" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157890 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" exitCode=0 Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157913 4737 generic.go:334] "Generic (PLEG): container finished" podID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" containerID="66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" exitCode=0 Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157965 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889"} Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.157989 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c"} Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.158001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" event={"ID":"ecb40773-20dc-48ef-bf7f-17f4a042b01c","Type":"ContainerDied","Data":"56dbde75f9c625602d0f93fe42f936bce62a2956e6b776567123379cdc8cd4c6"} Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.158018 4737 scope.go:117] "RemoveContainer" containerID="8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.158030 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jgjrk" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.161088 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qjff2_82627aad-2019-482e-934a-7e9729927a34/kube-multus/2.log" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.161139 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qjff2" event={"ID":"82627aad-2019-482e-934a-7e9729927a34","Type":"ContainerStarted","Data":"d980061c89eb48227c083cde495d1a6f979b03fd71301a90f4846e5b4099826f"} Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.186104 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jgjrk"] Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.187156 4737 scope.go:117] "RemoveContainer" containerID="570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.197954 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jgjrk"] Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.203358 4737 scope.go:117] "RemoveContainer" containerID="0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.219627 4737 scope.go:117] "RemoveContainer" containerID="8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.235889 4737 scope.go:117] "RemoveContainer" containerID="13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237313 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-systemd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237375 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-script-lib\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237404 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-systemd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237408 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-netd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237460 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-netd\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237534 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-var-lib-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: 
I0126 18:42:55.237508 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-var-lib-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237608 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-ovn\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237654 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-env-overrides\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237682 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237713 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-ovn\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237755 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-systemd-units\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237803 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237832 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-systemd-units\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237780 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-etc-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237886 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-etc-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237900 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237930 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-run-openvswitch\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237944 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-bin\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237976 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-netns\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.237996 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238017 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/13aef528-d160-451f-97db-46c7c0be2665-ovn-node-metrics-cert\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238038 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-slash\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238123 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-node-log\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238148 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-config\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238167 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-ovn-kubernetes\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238169 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-log-socket\") pod 
\"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238189 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-log-socket\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238202 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsskp\" (UniqueName: \"kubernetes.io/projected/13aef528-d160-451f-97db-46c7c0be2665-kube-api-access-xsskp\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238229 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-script-lib\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238263 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-kubelet\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238293 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-run-netns\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238376 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-slash\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238379 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-kubelet\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238397 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-host-cni-bin\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238418 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/13aef528-d160-451f-97db-46c7c0be2665-node-log\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238428 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-env-overrides\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.238925 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/13aef528-d160-451f-97db-46c7c0be2665-ovnkube-config\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.242939 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13aef528-d160-451f-97db-46c7c0be2665-ovn-node-metrics-cert\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.253835 4737 scope.go:117] "RemoveContainer" containerID="66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.272465 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsskp\" (UniqueName: \"kubernetes.io/projected/13aef528-d160-451f-97db-46c7c0be2665-kube-api-access-xsskp\") pod \"ovnkube-node-b5645\" (UID: \"13aef528-d160-451f-97db-46c7c0be2665\") " pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.273062 4737 scope.go:117] "RemoveContainer" containerID="ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.290999 4737 scope.go:117] "RemoveContainer" containerID="067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.309304 4737 scope.go:117] "RemoveContainer" containerID="a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.326986 4737 scope.go:117] "RemoveContainer" containerID="8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 
18:42:55.327478 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b\": container with ID starting with 8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b not found: ID does not exist" containerID="8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.327524 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b"} err="failed to get container status \"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b\": rpc error: code = NotFound desc = could not find container \"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b\": container with ID starting with 8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.327551 4737 scope.go:117] "RemoveContainer" containerID="570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.327909 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\": container with ID starting with 570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86 not found: ID does not exist" containerID="570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.327969 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86"} err="failed to get container status \"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\": rpc 
error: code = NotFound desc = could not find container \"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\": container with ID starting with 570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.327999 4737 scope.go:117] "RemoveContainer" containerID="0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.328898 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\": container with ID starting with 0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d not found: ID does not exist" containerID="0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.328938 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d"} err="failed to get container status \"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\": rpc error: code = NotFound desc = could not find container \"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\": container with ID starting with 0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.328967 4737 scope.go:117] "RemoveContainer" containerID="8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.329241 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\": container with ID starting with 
8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704 not found: ID does not exist" containerID="8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.329268 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704"} err="failed to get container status \"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\": rpc error: code = NotFound desc = could not find container \"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\": container with ID starting with 8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.329282 4737 scope.go:117] "RemoveContainer" containerID="13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.330261 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\": container with ID starting with 13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889 not found: ID does not exist" containerID="13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330288 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889"} err="failed to get container status \"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\": rpc error: code = NotFound desc = could not find container \"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\": container with ID starting with 13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889 not found: ID does not 
exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330302 4737 scope.go:117] "RemoveContainer" containerID="66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.330564 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\": container with ID starting with 66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c not found: ID does not exist" containerID="66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330590 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c"} err="failed to get container status \"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\": rpc error: code = NotFound desc = could not find container \"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\": container with ID starting with 66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330606 4737 scope.go:117] "RemoveContainer" containerID="ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.330888 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\": container with ID starting with ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981 not found: ID does not exist" containerID="ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330924 4737 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981"} err="failed to get container status \"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\": rpc error: code = NotFound desc = could not find container \"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\": container with ID starting with ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.330941 4737 scope.go:117] "RemoveContainer" containerID="067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.332238 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\": container with ID starting with 067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f not found: ID does not exist" containerID="067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.332288 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f"} err="failed to get container status \"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\": rpc error: code = NotFound desc = could not find container \"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\": container with ID starting with 067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.332315 4737 scope.go:117] "RemoveContainer" containerID="a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9" Jan 26 18:42:55 crc kubenswrapper[4737]: E0126 18:42:55.332725 4737 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\": container with ID starting with a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9 not found: ID does not exist" containerID="a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.332748 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9"} err="failed to get container status \"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\": rpc error: code = NotFound desc = could not find container \"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\": container with ID starting with a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.332767 4737 scope.go:117] "RemoveContainer" containerID="8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333055 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b"} err="failed to get container status \"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b\": rpc error: code = NotFound desc = could not find container \"8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b\": container with ID starting with 8e27b6f397361e34d0d8df88916c81d9690564a360505f53b30d8bee1858d35b not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333103 4737 scope.go:117] "RemoveContainer" containerID="570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333381 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86"} err="failed to get container status \"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\": rpc error: code = NotFound desc = could not find container \"570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86\": container with ID starting with 570bf995c9ab0a04cff8ada5b82ef19c9299d86ab480a43ea1446a3aedb8cd86 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333408 4737 scope.go:117] "RemoveContainer" containerID="0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333723 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d"} err="failed to get container status \"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\": rpc error: code = NotFound desc = could not find container \"0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d\": container with ID starting with 0330027f82eafcc297d9ea91babd144a993a1f9d5b5f376274521364421fb70d not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333748 4737 scope.go:117] "RemoveContainer" containerID="8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333970 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704"} err="failed to get container status \"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\": rpc error: code = NotFound desc = could not find container \"8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704\": container with ID starting with 
8b3d9e7e5a84aa89a81ca65443973a1a75bc1b54c2f3f5cbd6cf7a00f8d04704 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.333992 4737 scope.go:117] "RemoveContainer" containerID="13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.334322 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889"} err="failed to get container status \"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\": rpc error: code = NotFound desc = could not find container \"13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889\": container with ID starting with 13f6776860714e1ab348c9b7a767366f0b4b425d08ed27ee64abfaf2770f1889 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.334347 4737 scope.go:117] "RemoveContainer" containerID="66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.339612 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c"} err="failed to get container status \"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\": rpc error: code = NotFound desc = could not find container \"66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c\": container with ID starting with 66ec75b04c2383311d9d4c54148415f6f45821810aa9e68c12fa36c22637341c not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.339646 4737 scope.go:117] "RemoveContainer" containerID="ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.342279 4737 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981"} err="failed to get container status \"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\": rpc error: code = NotFound desc = could not find container \"ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981\": container with ID starting with ee2019712957d6ff1e329746e69d806c2cb554917815ebbac73b321965e5d981 not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.342308 4737 scope.go:117] "RemoveContainer" containerID="067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.342921 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f"} err="failed to get container status \"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\": rpc error: code = NotFound desc = could not find container \"067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f\": container with ID starting with 067cf449746568a0f2fa056863be0cc0bf40390eb6f239e011639fdc05f2ea8f not found: ID does not exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.342968 4737 scope.go:117] "RemoveContainer" containerID="a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.343857 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9"} err="failed to get container status \"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\": rpc error: code = NotFound desc = could not find container \"a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9\": container with ID starting with a45002c02d30f093be7e9c7fafe764878c1a5b6a7c1bd8ca2bb57bd59c98f2e9 not found: ID does not 
exist" Jan 26 18:42:55 crc kubenswrapper[4737]: I0126 18:42:55.398646 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" Jan 26 18:42:55 crc kubenswrapper[4737]: W0126 18:42:55.437301 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13aef528_d160_451f_97db_46c7c0be2665.slice/crio-410bd1370c0eb52befadbb66d2c70add358c2f6bfd3680af30c9d6283e566fc0 WatchSource:0}: Error finding container 410bd1370c0eb52befadbb66d2c70add358c2f6bfd3680af30c9d6283e566fc0: Status 404 returned error can't find the container with id 410bd1370c0eb52befadbb66d2c70add358c2f6bfd3680af30c9d6283e566fc0 Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.172267 4737 generic.go:334] "Generic (PLEG): container finished" podID="13aef528-d160-451f-97db-46c7c0be2665" containerID="ae85b36d8a4271df7ed5b01eeb73e068a42f5ee495b45b9a862f2bacb8a1b618" exitCode=0 Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.172315 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerDied","Data":"ae85b36d8a4271df7ed5b01eeb73e068a42f5ee495b45b9a862f2bacb8a1b618"} Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.172637 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"410bd1370c0eb52befadbb66d2c70add358c2f6bfd3680af30c9d6283e566fc0"} Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.343795 4737 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.530846 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"] Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.531669 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.535237 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.535582 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-l2d46" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.535667 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.633601 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"] Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.634348 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.637484 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.637733 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dfpqw" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.647135 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"] Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.648043 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.657725 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7lxc\" (UniqueName: \"kubernetes.io/projected/780e85db-cb8c-4a8c-920d-2594cd33eebf-kube-api-access-t7lxc\") pod \"obo-prometheus-operator-68bc856cb9-jvfnx\" (UID: \"780e85db-cb8c-4a8c-920d-2594cd33eebf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.758838 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.758894 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.758927 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc 
kubenswrapper[4737]: I0126 18:42:56.758951 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.758986 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7lxc\" (UniqueName: \"kubernetes.io/projected/780e85db-cb8c-4a8c-920d-2594cd33eebf-kube-api-access-t7lxc\") pod \"obo-prometheus-operator-68bc856cb9-jvfnx\" (UID: \"780e85db-cb8c-4a8c-920d-2594cd33eebf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.781164 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7lxc\" (UniqueName: \"kubernetes.io/projected/780e85db-cb8c-4a8c-920d-2594cd33eebf-kube-api-access-t7lxc\") pod \"obo-prometheus-operator-68bc856cb9-jvfnx\" (UID: \"780e85db-cb8c-4a8c-920d-2594cd33eebf\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.850828 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xf99z"] Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.852059 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.854621 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.858793 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6ts54" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.858901 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.860119 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.860164 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.860204 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.860230 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.869953 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.870772 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.871643 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc4df7ac-3298-4316-8c9b-1ac9827330fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85\" (UID: \"cc4df7ac-3298-4316-8c9b-1ac9827330fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.876178 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33031648-f53a-4f71-8c03-041f7f1fcbf5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r\" (UID: \"33031648-f53a-4f71-8c03-041f7f1fcbf5\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: E0126 18:42:56.930362 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(0749cdce8e4ea1775fb40c054953049e50e8e9b55a8bf468d190d88796664b63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 18:42:56 crc kubenswrapper[4737]: E0126 18:42:56.930428 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(0749cdce8e4ea1775fb40c054953049e50e8e9b55a8bf468d190d88796664b63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: E0126 18:42:56.930461 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(0749cdce8e4ea1775fb40c054953049e50e8e9b55a8bf468d190d88796664b63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" Jan 26 18:42:56 crc kubenswrapper[4737]: E0126 18:42:56.930508 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators(780e85db-cb8c-4a8c-920d-2594cd33eebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators(780e85db-cb8c-4a8c-920d-2594cd33eebf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(0749cdce8e4ea1775fb40c054953049e50e8e9b55a8bf468d190d88796664b63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" podUID="780e85db-cb8c-4a8c-920d-2594cd33eebf" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.961835 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqjs\" (UniqueName: \"kubernetes.io/projected/b319754a-04cc-40dd-b031-ea72a3d19db2-kube-api-access-xtqjs\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: \"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.961887 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b319754a-04cc-40dd-b031-ea72a3d19db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: \"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.983866 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.995051 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:56 crc kubenswrapper[4737]: I0126 18:42:56.999039 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb40773-20dc-48ef-bf7f-17f4a042b01c" path="/var/lib/kubelet/pods/ecb40773-20dc-48ef-bf7f-17f4a042b01c/volumes" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.033282 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(109fcaf3b9c8c90f661a98257a0d179ad36651f098ea1ec96b1ed0a6e7999628): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.033348 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(109fcaf3b9c8c90f661a98257a0d179ad36651f098ea1ec96b1ed0a6e7999628): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.033376 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(109fcaf3b9c8c90f661a98257a0d179ad36651f098ea1ec96b1ed0a6e7999628): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.033422 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators(cc4df7ac-3298-4316-8c9b-1ac9827330fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators(cc4df7ac-3298-4316-8c9b-1ac9827330fd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(109fcaf3b9c8c90f661a98257a0d179ad36651f098ea1ec96b1ed0a6e7999628): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" podUID="cc4df7ac-3298-4316-8c9b-1ac9827330fd" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.044682 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(62dcb8c933b9d7411dc43107b09d87a16bb7628510339c68091221b780a22237): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.044758 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(62dcb8c933b9d7411dc43107b09d87a16bb7628510339c68091221b780a22237): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.044782 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(62dcb8c933b9d7411dc43107b09d87a16bb7628510339c68091221b780a22237): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.044821 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators(33031648-f53a-4f71-8c03-041f7f1fcbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators(33031648-f53a-4f71-8c03-041f7f1fcbf5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(62dcb8c933b9d7411dc43107b09d87a16bb7628510339c68091221b780a22237): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" podUID="33031648-f53a-4f71-8c03-041f7f1fcbf5" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.063540 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r5vwv"] Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.063843 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtqjs\" (UniqueName: \"kubernetes.io/projected/b319754a-04cc-40dd-b031-ea72a3d19db2-kube-api-access-xtqjs\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: \"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.063885 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b319754a-04cc-40dd-b031-ea72a3d19db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: 
\"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.064299 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.067055 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-q4pnf" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.067732 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b319754a-04cc-40dd-b031-ea72a3d19db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: \"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.086761 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtqjs\" (UniqueName: \"kubernetes.io/projected/b319754a-04cc-40dd-b031-ea72a3d19db2-kube-api-access-xtqjs\") pod \"observability-operator-59bdc8b94-xf99z\" (UID: \"b319754a-04cc-40dd-b031-ea72a3d19db2\") " pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.165864 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llsbf\" (UniqueName: \"kubernetes.io/projected/7478def9-da54-4632-803e-47f36b6fb64b-kube-api-access-llsbf\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.165961 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7478def9-da54-4632-803e-47f36b6fb64b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183701 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"4e5e46234c4c5baa4fe4c6a69029bd8a5105c2eb16ddb71f4974a66dd9966228"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183778 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"004606c6d1a8e84ac31bc4f193eee0d01c65f4d61c2a0616d50f97cb2c243ad1"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183790 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"5dd6ea724cef81882b5de720edca770938472131bc1b78358b915acfee8b5025"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183799 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"1b61f1f7e2c63b42842e611a1a83db9c6e48b732ad97a1d7b12fa3d91271350c"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183808 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"329abfda3207a755f2ac5b6fe4140905e85c80dba693cf9fbb1fa124876a075e"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.183818 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"4345223f9821241f9019a92d48740982c615010b8311b06c2ec6fbe36217c4c9"} Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.267967 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7478def9-da54-4632-803e-47f36b6fb64b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.268107 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llsbf\" (UniqueName: \"kubernetes.io/projected/7478def9-da54-4632-803e-47f36b6fb64b-kube-api-access-llsbf\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.268967 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7478def9-da54-4632-803e-47f36b6fb64b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.285329 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llsbf\" (UniqueName: \"kubernetes.io/projected/7478def9-da54-4632-803e-47f36b6fb64b-kube-api-access-llsbf\") pod \"perses-operator-5bf474d74f-r5vwv\" (UID: \"7478def9-da54-4632-803e-47f36b6fb64b\") " pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.296044 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.366119 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(01c3bc48e9c61b1b4a4ef3cd5ac89c11a31cfaab65b07061ed6f74f6d85bf56b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.366197 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(01c3bc48e9c61b1b4a4ef3cd5ac89c11a31cfaab65b07061ed6f74f6d85bf56b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.366230 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(01c3bc48e9c61b1b4a4ef3cd5ac89c11a31cfaab65b07061ed6f74f6d85bf56b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-xf99z" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.366295 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-xf99z_openshift-operators(b319754a-04cc-40dd-b031-ea72a3d19db2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-xf99z_openshift-operators(b319754a-04cc-40dd-b031-ea72a3d19db2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(01c3bc48e9c61b1b4a4ef3cd5ac89c11a31cfaab65b07061ed6f74f6d85bf56b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" podUID="b319754a-04cc-40dd-b031-ea72a3d19db2" Jan 26 18:42:57 crc kubenswrapper[4737]: I0126 18:42:57.383960 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.423956 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(12116cdf40c772fb4236ee08af49ed5d7bc8360f1bd20c611e7963a92d658ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.424021 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(12116cdf40c772fb4236ee08af49ed5d7bc8360f1bd20c611e7963a92d658ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.424043 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(12116cdf40c772fb4236ee08af49ed5d7bc8360f1bd20c611e7963a92d658ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:42:57 crc kubenswrapper[4737]: E0126 18:42:57.424169 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-r5vwv_openshift-operators(7478def9-da54-4632-803e-47f36b6fb64b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-r5vwv_openshift-operators(7478def9-da54-4632-803e-47f36b6fb64b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(12116cdf40c772fb4236ee08af49ed5d7bc8360f1bd20c611e7963a92d658ddd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" podUID="7478def9-da54-4632-803e-47f36b6fb64b"
Jan 26 18:43:00 crc kubenswrapper[4737]: I0126 18:43:00.202627 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"65a68f359bf2cb17dc564e41f95144a3d6368bf9d595cd066813c50e5555753c"}
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.228424 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" event={"ID":"13aef528-d160-451f-97db-46c7c0be2665","Type":"ContainerStarted","Data":"c16d0b4f2d2ffdfc3c7203aa29fc0264686a41120f30c95c271641bc447adfc6"}
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.229965 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.261387 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b5645" podStartSLOduration=8.261367846 podStartE2EDuration="8.261367846s" podCreationTimestamp="2026-01-26 18:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:03.257021086 +0000 UTC m=+756.565215794" watchObservedRunningTime="2026-01-26 18:43:03.261367846 +0000 UTC m=+756.569562554"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.278700 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.619731 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"]
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.619858 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.620340 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.631296 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"]
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.631450 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.631939 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.651203 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"]
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.651328 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.651893 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.682627 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r5vwv"]
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.682776 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.683362 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.789797 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(f719cd324c2b4214219808af7b01ca657b716b7036f54c6c349c27fdd1f30de6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.789873 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(f719cd324c2b4214219808af7b01ca657b716b7036f54c6c349c27fdd1f30de6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.789897 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(f719cd324c2b4214219808af7b01ca657b716b7036f54c6c349c27fdd1f30de6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.789981 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators(cc4df7ac-3298-4316-8c9b-1ac9827330fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators(cc4df7ac-3298-4316-8c9b-1ac9827330fd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_openshift-operators_cc4df7ac-3298-4316-8c9b-1ac9827330fd_0(f719cd324c2b4214219808af7b01ca657b716b7036f54c6c349c27fdd1f30de6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" podUID="cc4df7ac-3298-4316-8c9b-1ac9827330fd"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.807248 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(e256e56d688f20a45aa92ef66cef362c711e1197d32cafcff0a139d860f262e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.807321 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(e256e56d688f20a45aa92ef66cef362c711e1197d32cafcff0a139d860f262e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.807541 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(e256e56d688f20a45aa92ef66cef362c711e1197d32cafcff0a139d860f262e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.807593 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators(780e85db-cb8c-4a8c-920d-2594cd33eebf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators(780e85db-cb8c-4a8c-920d-2594cd33eebf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jvfnx_openshift-operators_780e85db-cb8c-4a8c-920d-2594cd33eebf_0(e256e56d688f20a45aa92ef66cef362c711e1197d32cafcff0a139d860f262e9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" podUID="780e85db-cb8c-4a8c-920d-2594cd33eebf"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.811844 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xf99z"]
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.811978 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:03 crc kubenswrapper[4737]: I0126 18:43:03.812492 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.892780 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(d5428d7bb35079a64dbff397d0c933d43d4d52bc96033b7e52b16da3ae5e28e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.892856 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(d5428d7bb35079a64dbff397d0c933d43d4d52bc96033b7e52b16da3ae5e28e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.892887 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(d5428d7bb35079a64dbff397d0c933d43d4d52bc96033b7e52b16da3ae5e28e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.892943 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators(33031648-f53a-4f71-8c03-041f7f1fcbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators(33031648-f53a-4f71-8c03-041f7f1fcbf5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_openshift-operators_33031648-f53a-4f71-8c03-041f7f1fcbf5_0(d5428d7bb35079a64dbff397d0c933d43d4d52bc96033b7e52b16da3ae5e28e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" podUID="33031648-f53a-4f71-8c03-041f7f1fcbf5"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.893295 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(a028035105928ef0cd67754df898a4c07bf6b415be5cb9c4a96d56dce1b3144f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.893358 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(a028035105928ef0cd67754df898a4c07bf6b415be5cb9c4a96d56dce1b3144f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.893385 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(a028035105928ef0cd67754df898a4c07bf6b415be5cb9c4a96d56dce1b3144f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.893430 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-r5vwv_openshift-operators(7478def9-da54-4632-803e-47f36b6fb64b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-r5vwv_openshift-operators(7478def9-da54-4632-803e-47f36b6fb64b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r5vwv_openshift-operators_7478def9-da54-4632-803e-47f36b6fb64b_0(a028035105928ef0cd67754df898a4c07bf6b415be5cb9c4a96d56dce1b3144f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" podUID="7478def9-da54-4632-803e-47f36b6fb64b"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.922392 4737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(7164327923a5eca0132e35632e74e7c34cd36ccd56d68d6c652376eb85cfa573): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.922468 4737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(7164327923a5eca0132e35632e74e7c34cd36ccd56d68d6c652376eb85cfa573): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.922492 4737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(7164327923a5eca0132e35632e74e7c34cd36ccd56d68d6c652376eb85cfa573): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:03 crc kubenswrapper[4737]: E0126 18:43:03.922544 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-xf99z_openshift-operators(b319754a-04cc-40dd-b031-ea72a3d19db2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-xf99z_openshift-operators(b319754a-04cc-40dd-b031-ea72a3d19db2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-xf99z_openshift-operators_b319754a-04cc-40dd-b031-ea72a3d19db2_0(7164327923a5eca0132e35632e74e7c34cd36ccd56d68d6c652376eb85cfa573): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" podUID="b319754a-04cc-40dd-b031-ea72a3d19db2"
Jan 26 18:43:04 crc kubenswrapper[4737]: I0126 18:43:04.235113 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:04 crc kubenswrapper[4737]: I0126 18:43:04.235170 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:04 crc kubenswrapper[4737]: I0126 18:43:04.316197 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:14 crc kubenswrapper[4737]: I0126 18:43:14.981959 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:14 crc kubenswrapper[4737]: I0126 18:43:14.983156 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:15 crc kubenswrapper[4737]: I0126 18:43:15.375267 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-xf99z"]
Jan 26 18:43:15 crc kubenswrapper[4737]: I0126 18:43:15.395972 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 18:43:15 crc kubenswrapper[4737]: I0126 18:43:15.981429 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:15 crc kubenswrapper[4737]: I0126 18:43:15.982631 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.369311 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" event={"ID":"b319754a-04cc-40dd-b031-ea72a3d19db2","Type":"ContainerStarted","Data":"82e8ff953575dca9e67650934ea88a27dd819be6ef18137e0c58d153d16d0f8d"}
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.453276 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r"]
Jan 26 18:43:16 crc kubenswrapper[4737]: W0126 18:43:16.459434 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33031648_f53a_4f71_8c03_041f7f1fcbf5.slice/crio-f3022be70fa42ac1b8a280105602ac7dd7f2cf208f4d47d89c9dc912c2e08cb7 WatchSource:0}: Error finding container f3022be70fa42ac1b8a280105602ac7dd7f2cf208f4d47d89c9dc912c2e08cb7: Status 404 returned error can't find the container with id f3022be70fa42ac1b8a280105602ac7dd7f2cf208f4d47d89c9dc912c2e08cb7
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.987052 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.987289 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.987881 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"
Jan 26 18:43:16 crc kubenswrapper[4737]: I0126 18:43:16.989083 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"
Jan 26 18:43:17 crc kubenswrapper[4737]: I0126 18:43:17.378152 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" event={"ID":"33031648-f53a-4f71-8c03-041f7f1fcbf5","Type":"ContainerStarted","Data":"f3022be70fa42ac1b8a280105602ac7dd7f2cf208f4d47d89c9dc912c2e08cb7"}
Jan 26 18:43:17 crc kubenswrapper[4737]: I0126 18:43:17.505277 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85"]
Jan 26 18:43:17 crc kubenswrapper[4737]: W0126 18:43:17.519170 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc4df7ac_3298_4316_8c9b_1ac9827330fd.slice/crio-a2510aca642b49447dd347be9e4b134ea3b041f5accf2031d2a526e09d16975b WatchSource:0}: Error finding container a2510aca642b49447dd347be9e4b134ea3b041f5accf2031d2a526e09d16975b: Status 404 returned error can't find the container with id a2510aca642b49447dd347be9e4b134ea3b041f5accf2031d2a526e09d16975b
Jan 26 18:43:17 crc kubenswrapper[4737]: I0126 18:43:17.575918 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx"]
Jan 26 18:43:17 crc kubenswrapper[4737]: W0126 18:43:17.633320 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod780e85db_cb8c_4a8c_920d_2594cd33eebf.slice/crio-4a554e639c2f84712bf8316bfd04736ffd2dc42ccd7860e5a3ed73d172bf2315 WatchSource:0}: Error finding container 4a554e639c2f84712bf8316bfd04736ffd2dc42ccd7860e5a3ed73d172bf2315: Status 404 returned error can't find the container with id 4a554e639c2f84712bf8316bfd04736ffd2dc42ccd7860e5a3ed73d172bf2315
Jan 26 18:43:17 crc kubenswrapper[4737]: I0126 18:43:17.981315 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:17 crc kubenswrapper[4737]: I0126 18:43:17.982189 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:18 crc kubenswrapper[4737]: I0126 18:43:18.365713 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r5vwv"]
Jan 26 18:43:18 crc kubenswrapper[4737]: I0126 18:43:18.425745 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" event={"ID":"cc4df7ac-3298-4316-8c9b-1ac9827330fd","Type":"ContainerStarted","Data":"a2510aca642b49447dd347be9e4b134ea3b041f5accf2031d2a526e09d16975b"}
Jan 26 18:43:18 crc kubenswrapper[4737]: I0126 18:43:18.428909 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" event={"ID":"780e85db-cb8c-4a8c-920d-2594cd33eebf","Type":"ContainerStarted","Data":"4a554e639c2f84712bf8316bfd04736ffd2dc42ccd7860e5a3ed73d172bf2315"}
Jan 26 18:43:24 crc kubenswrapper[4737]: I0126 18:43:24.477961 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" event={"ID":"7478def9-da54-4632-803e-47f36b6fb64b","Type":"ContainerStarted","Data":"670417a935bba88cb199782b3bdc22394907e2e5f04b04e84984f256fe7c1ac7"}
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.419891 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b5645"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.495680 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" event={"ID":"33031648-f53a-4f71-8c03-041f7f1fcbf5","Type":"ContainerStarted","Data":"b7169807cc1579d938a5c3caa32d3e76f10b42c14ee6ff6a13ca65f6fd4c7bf6"}
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.499047 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" event={"ID":"cc4df7ac-3298-4316-8c9b-1ac9827330fd","Type":"ContainerStarted","Data":"0afd2ff3373fa2b6892cb5d1532e5dafdf664cca137b816b6bf265fff85f09ad"}
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.502265 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" event={"ID":"b319754a-04cc-40dd-b031-ea72a3d19db2","Type":"ContainerStarted","Data":"733cb5c6f8e7c526cfdd0a0a9a5df38ab0967e14e872f6b54dc438ff1cc6f796"}
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.503414 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.512672 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" event={"ID":"780e85db-cb8c-4a8c-920d-2594cd33eebf","Type":"ContainerStarted","Data":"6066877a8d36233382492d366cae397b7ab791583060d6bf96d40bea5b2a23ef"}
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.519934 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r" podStartSLOduration=21.698492369 podStartE2EDuration="29.519911221s" podCreationTimestamp="2026-01-26 18:42:56 +0000 UTC" firstStartedPulling="2026-01-26 18:43:16.465766965 +0000 UTC m=+769.773961673" lastFinishedPulling="2026-01-26 18:43:24.287185817 +0000 UTC m=+777.595380525" observedRunningTime="2026-01-26 18:43:25.514509774 +0000 UTC m=+778.822704482" watchObservedRunningTime="2026-01-26 18:43:25.519911221 +0000 UTC m=+778.828105929"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.544641 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jvfnx" podStartSLOduration=22.860559494 podStartE2EDuration="29.544620406s" podCreationTimestamp="2026-01-26 18:42:56 +0000 UTC" firstStartedPulling="2026-01-26 18:43:17.641736212 +0000 UTC m=+770.949930920" lastFinishedPulling="2026-01-26 18:43:24.325797124 +0000 UTC m=+777.633991832" observedRunningTime="2026-01-26 18:43:25.54004921 +0000 UTC m=+778.848243918" watchObservedRunningTime="2026-01-26 18:43:25.544620406 +0000 UTC m=+778.852815114"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.569022 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-xf99z"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.644956 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-xf99z" podStartSLOduration=20.713398044 podStartE2EDuration="29.644929885s" podCreationTimestamp="2026-01-26 18:42:56 +0000 UTC" firstStartedPulling="2026-01-26 18:43:15.395563026 +0000 UTC m=+768.703757734" lastFinishedPulling="2026-01-26 18:43:24.327094867 +0000 UTC m=+777.635289575" observedRunningTime="2026-01-26 18:43:25.591529183 +0000 UTC m=+778.899723891" watchObservedRunningTime="2026-01-26 18:43:25.644929885 +0000 UTC m=+778.953124593"
Jan 26 18:43:25 crc kubenswrapper[4737]: I0126 18:43:25.646273 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-b48686b7d-tjv85" podStartSLOduration=22.893179539 podStartE2EDuration="29.646262099s" podCreationTimestamp="2026-01-26 18:42:56 +0000 UTC" firstStartedPulling="2026-01-26 18:43:17.534324693 +0000 UTC m=+770.842519401" lastFinishedPulling="2026-01-26 18:43:24.287407253 +0000 UTC m=+777.595601961" observedRunningTime="2026-01-26 18:43:25.624771415 +0000 UTC m=+778.932966123" watchObservedRunningTime="2026-01-26 18:43:25.646262099 +0000 UTC m=+778.954456807"
Jan 26 18:43:26 crc kubenswrapper[4737]: I0126 18:43:26.523059 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" event={"ID":"7478def9-da54-4632-803e-47f36b6fb64b","Type":"ContainerStarted","Data":"b09250e33245dbfbe019228b3426473ef01be8a231df898c70b9eea8a61f84f4"}
Jan 26 18:43:27 crc kubenswrapper[4737]: I0126 18:43:27.384515 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv"
Jan 26 18:43:30 crc kubenswrapper[4737]: I0126 18:43:30.949054 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 18:43:30 crc kubenswrapper[4737]: I0126 18:43:30.949800 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.332673 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" podStartSLOduration=32.924551915 podStartE2EDuration="35.332655281s" podCreationTimestamp="2026-01-26 18:42:57 +0000 UTC" firstStartedPulling="2026-01-26 18:43:23.518635832 +0000 UTC m=+776.826830530" lastFinishedPulling="2026-01-26 18:43:25.926739168 +0000 UTC m=+779.234933896" observedRunningTime="2026-01-26 18:43:26.549462671 +0000 UTC m=+779.857657379" watchObservedRunningTime="2026-01-26 18:43:32.332655281 +0000 UTC m=+785.640849989"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.335710 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qschs"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.336577 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.343030 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.343361 4737 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mmlkj"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.344454 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.348092 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qschs"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.389185 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-57xsl"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.391009 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.396145 4737 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7sgnv"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.403587 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2595\" (UniqueName: \"kubernetes.io/projected/c42be5f9-9a91-43c2-ac4b-5c7b49bb434c-kube-api-access-c2595\") pod \"cert-manager-cainjector-cf98fcc89-qschs\" (UID: \"c42be5f9-9a91-43c2-ac4b-5c7b49bb434c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.403623 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bjjtc"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.404560 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bjjtc"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.408381 4737 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n5h49"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.413637 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-57xsl"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.425881 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bjjtc"]
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.505798 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klrnv\" (UniqueName: \"kubernetes.io/projected/e5a74a57-5f9a-442f-a166-7787942994c8-kube-api-access-klrnv\") pod \"cert-manager-webhook-687f57d79b-57xsl\" (UID: \"e5a74a57-5f9a-442f-a166-7787942994c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.505876 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj2rs\" (UniqueName: \"kubernetes.io/projected/780b9f7e-40b5-4b9b-94bc-0401ce35b5e3-kube-api-access-kj2rs\") pod \"cert-manager-858654f9db-bjjtc\" (UID: \"780b9f7e-40b5-4b9b-94bc-0401ce35b5e3\") " pod="cert-manager/cert-manager-858654f9db-bjjtc"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.505927 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2595\" (UniqueName: \"kubernetes.io/projected/c42be5f9-9a91-43c2-ac4b-5c7b49bb434c-kube-api-access-c2595\") pod \"cert-manager-cainjector-cf98fcc89-qschs\" (UID: \"c42be5f9-9a91-43c2-ac4b-5c7b49bb434c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.529321 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2595\" (UniqueName: \"kubernetes.io/projected/c42be5f9-9a91-43c2-ac4b-5c7b49bb434c-kube-api-access-c2595\") pod \"cert-manager-cainjector-cf98fcc89-qschs\" (UID: \"c42be5f9-9a91-43c2-ac4b-5c7b49bb434c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.607689 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klrnv\" (UniqueName: \"kubernetes.io/projected/e5a74a57-5f9a-442f-a166-7787942994c8-kube-api-access-klrnv\") pod \"cert-manager-webhook-687f57d79b-57xsl\" (UID: \"e5a74a57-5f9a-442f-a166-7787942994c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl"
Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.607742 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj2rs\" (UniqueName:
\"kubernetes.io/projected/780b9f7e-40b5-4b9b-94bc-0401ce35b5e3-kube-api-access-kj2rs\") pod \"cert-manager-858654f9db-bjjtc\" (UID: \"780b9f7e-40b5-4b9b-94bc-0401ce35b5e3\") " pod="cert-manager/cert-manager-858654f9db-bjjtc" Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.627172 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj2rs\" (UniqueName: \"kubernetes.io/projected/780b9f7e-40b5-4b9b-94bc-0401ce35b5e3-kube-api-access-kj2rs\") pod \"cert-manager-858654f9db-bjjtc\" (UID: \"780b9f7e-40b5-4b9b-94bc-0401ce35b5e3\") " pod="cert-manager/cert-manager-858654f9db-bjjtc" Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.627922 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klrnv\" (UniqueName: \"kubernetes.io/projected/e5a74a57-5f9a-442f-a166-7787942994c8-kube-api-access-klrnv\") pod \"cert-manager-webhook-687f57d79b-57xsl\" (UID: \"e5a74a57-5f9a-442f-a166-7787942994c8\") " pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.668033 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs" Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.706237 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" Jan 26 18:43:32 crc kubenswrapper[4737]: I0126 18:43:32.719884 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bjjtc" Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.154859 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-57xsl"] Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.173421 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qschs"] Jan 26 18:43:33 crc kubenswrapper[4737]: W0126 18:43:33.180771 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42be5f9_9a91_43c2_ac4b_5c7b49bb434c.slice/crio-8c7dfe3370de60ca161ed1adc95f3bc4eb104492fc376354ab1866b6699a2a5d WatchSource:0}: Error finding container 8c7dfe3370de60ca161ed1adc95f3bc4eb104492fc376354ab1866b6699a2a5d: Status 404 returned error can't find the container with id 8c7dfe3370de60ca161ed1adc95f3bc4eb104492fc376354ab1866b6699a2a5d Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.384016 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bjjtc"] Jan 26 18:43:33 crc kubenswrapper[4737]: W0126 18:43:33.386426 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780b9f7e_40b5_4b9b_94bc_0401ce35b5e3.slice/crio-2a0968baa7639ebca1f55c59696057524b57cf06afd394ad64bce15e70ff8a44 WatchSource:0}: Error finding container 2a0968baa7639ebca1f55c59696057524b57cf06afd394ad64bce15e70ff8a44: Status 404 returned error can't find the container with id 2a0968baa7639ebca1f55c59696057524b57cf06afd394ad64bce15e70ff8a44 Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.580907 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs" 
event={"ID":"c42be5f9-9a91-43c2-ac4b-5c7b49bb434c","Type":"ContainerStarted","Data":"8c7dfe3370de60ca161ed1adc95f3bc4eb104492fc376354ab1866b6699a2a5d"} Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.582951 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bjjtc" event={"ID":"780b9f7e-40b5-4b9b-94bc-0401ce35b5e3","Type":"ContainerStarted","Data":"2a0968baa7639ebca1f55c59696057524b57cf06afd394ad64bce15e70ff8a44"} Jan 26 18:43:33 crc kubenswrapper[4737]: I0126 18:43:33.584269 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" event={"ID":"e5a74a57-5f9a-442f-a166-7787942994c8","Type":"ContainerStarted","Data":"79aa9f09748008baf19ed7b7c09b36816ca89470ad77008dd66e977e29b79afd"} Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.390333 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-r5vwv" Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.629632 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bjjtc" event={"ID":"780b9f7e-40b5-4b9b-94bc-0401ce35b5e3","Type":"ContainerStarted","Data":"a1d2694dece7227f6addb9163d619935478ba2673f2010b4364a520086548a35"} Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.631719 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" event={"ID":"e5a74a57-5f9a-442f-a166-7787942994c8","Type":"ContainerStarted","Data":"85f0856dcb7e062346a2a4f9987ce24466fc4c446874894b66caf7f91e74dd75"} Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.632552 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.636773 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs" event={"ID":"c42be5f9-9a91-43c2-ac4b-5c7b49bb434c","Type":"ContainerStarted","Data":"b6255b3a5f88e1c385e281b23f0a3e9aad113d7bba5d9b4098417b163bdd6863"} Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.653487 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bjjtc" podStartSLOduration=1.939294869 podStartE2EDuration="5.653462735s" podCreationTimestamp="2026-01-26 18:43:32 +0000 UTC" firstStartedPulling="2026-01-26 18:43:33.390008985 +0000 UTC m=+786.698203693" lastFinishedPulling="2026-01-26 18:43:37.104176851 +0000 UTC m=+790.412371559" observedRunningTime="2026-01-26 18:43:37.652406268 +0000 UTC m=+790.960600976" watchObservedRunningTime="2026-01-26 18:43:37.653462735 +0000 UTC m=+790.961657443" Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.696552 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" podStartSLOduration=1.731676193 podStartE2EDuration="5.696526404s" podCreationTimestamp="2026-01-26 18:43:32 +0000 UTC" firstStartedPulling="2026-01-26 18:43:33.165502622 +0000 UTC m=+786.473697340" lastFinishedPulling="2026-01-26 18:43:37.130352843 +0000 UTC m=+790.438547551" observedRunningTime="2026-01-26 18:43:37.691261702 +0000 UTC m=+790.999456410" watchObservedRunningTime="2026-01-26 18:43:37.696526404 +0000 UTC m=+791.004721112" Jan 26 18:43:37 crc kubenswrapper[4737]: I0126 18:43:37.724445 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qschs" podStartSLOduration=1.8510693759999999 podStartE2EDuration="5.724425971s" podCreationTimestamp="2026-01-26 18:43:32 +0000 UTC" firstStartedPulling="2026-01-26 18:43:33.183332864 +0000 UTC m=+786.491527572" lastFinishedPulling="2026-01-26 18:43:37.056689449 +0000 UTC m=+790.364884167" observedRunningTime="2026-01-26 
18:43:37.722616105 +0000 UTC m=+791.030810813" watchObservedRunningTime="2026-01-26 18:43:37.724425971 +0000 UTC m=+791.032620679" Jan 26 18:43:42 crc kubenswrapper[4737]: I0126 18:43:42.710289 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-57xsl" Jan 26 18:44:00 crc kubenswrapper[4737]: I0126 18:44:00.949387 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:44:00 crc kubenswrapper[4737]: I0126 18:44:00.950105 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.397046 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7"] Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.398796 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.401189 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.406896 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7"] Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.567293 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.567935 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.568093 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: 
I0126 18:44:06.669898 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.669975 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.670027 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.670530 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.670533 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.689121 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.716357 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.813190 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf"] Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.815142 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.834678 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf"] Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.879201 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.879260 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw9zr\" (UniqueName: \"kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.879286 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.968531 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7"] Jan 26 
18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.981042 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.981154 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw9zr\" (UniqueName: \"kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.981194 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.981565 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:06 crc kubenswrapper[4737]: I0126 18:44:06.982114 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:07 crc kubenswrapper[4737]: I0126 18:44:07.007044 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw9zr\" (UniqueName: \"kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:07 crc kubenswrapper[4737]: I0126 18:44:07.018050 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" event={"ID":"65f7c351-84bb-41e0-9775-a820da54e2eb","Type":"ContainerStarted","Data":"a8df0a4216529a9d62eecfae469ff79b95249e471b6e0e8d0948083e362e99cc"} Jan 26 18:44:07 crc kubenswrapper[4737]: I0126 18:44:07.141229 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:07 crc kubenswrapper[4737]: I0126 18:44:07.588154 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf"] Jan 26 18:44:07 crc kubenswrapper[4737]: W0126 18:44:07.603389 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52bcbbde_c297_4cce_80fd_cde90894b5df.slice/crio-fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd WatchSource:0}: Error finding container fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd: Status 404 returned error can't find the container with id fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd Jan 26 18:44:08 crc kubenswrapper[4737]: I0126 18:44:08.024171 4737 generic.go:334] "Generic (PLEG): container finished" podID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerID="c9f91b77cd61e57bb25ada68aca08cb5f5bb629591e24571a368d6eec79384fc" exitCode=0 Jan 26 18:44:08 crc kubenswrapper[4737]: I0126 18:44:08.024260 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" event={"ID":"52bcbbde-c297-4cce-80fd-cde90894b5df","Type":"ContainerDied","Data":"c9f91b77cd61e57bb25ada68aca08cb5f5bb629591e24571a368d6eec79384fc"} Jan 26 18:44:08 crc kubenswrapper[4737]: I0126 18:44:08.024287 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" event={"ID":"52bcbbde-c297-4cce-80fd-cde90894b5df","Type":"ContainerStarted","Data":"fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd"} Jan 26 18:44:08 crc kubenswrapper[4737]: I0126 18:44:08.026240 4737 generic.go:334] "Generic (PLEG): container finished" 
podID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerID="e7bee9fcb2a1bb333a1e443fd3b6f300447ecd83b1089ea077abe68c5ee7ada9" exitCode=0 Jan 26 18:44:08 crc kubenswrapper[4737]: I0126 18:44:08.026272 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" event={"ID":"65f7c351-84bb-41e0-9775-a820da54e2eb","Type":"ContainerDied","Data":"e7bee9fcb2a1bb333a1e443fd3b6f300447ecd83b1089ea077abe68c5ee7ada9"} Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.039705 4737 generic.go:334] "Generic (PLEG): container finished" podID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerID="1af72f058098bcd04cc1cb53f6c2432739112b87dee969346d39151b97cb7e71" exitCode=0 Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.039907 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" event={"ID":"52bcbbde-c297-4cce-80fd-cde90894b5df","Type":"ContainerDied","Data":"1af72f058098bcd04cc1cb53f6c2432739112b87dee969346d39151b97cb7e71"} Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.042743 4737 generic.go:334] "Generic (PLEG): container finished" podID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerID="f6dbb104867220d94a39510c4de55a2a186310594c2467d06a44e77a21e738da" exitCode=0 Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.042810 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" event={"ID":"65f7c351-84bb-41e0-9775-a820da54e2eb","Type":"ContainerDied","Data":"f6dbb104867220d94a39510c4de55a2a186310594c2467d06a44e77a21e738da"} Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.160745 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.162053 4737 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.174697 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.255929 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfdjk\" (UniqueName: \"kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.256365 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.256450 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.358119 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfdjk\" (UniqueName: \"kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.358180 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.358230 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.358729 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.358795 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.377156 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfdjk\" (UniqueName: \"kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk\") pod \"redhat-operators-xqgvz\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.481228 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:10 crc kubenswrapper[4737]: I0126 18:44:10.731748 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.051420 4737 generic.go:334] "Generic (PLEG): container finished" podID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerID="fbb5daceeb191274202f0ce2462d7c9d46c24e278274fe4e7c387fd5a4a6cc8b" exitCode=0 Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.051610 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" event={"ID":"65f7c351-84bb-41e0-9775-a820da54e2eb","Type":"ContainerDied","Data":"fbb5daceeb191274202f0ce2462d7c9d46c24e278274fe4e7c387fd5a4a6cc8b"} Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.053153 4737 generic.go:334] "Generic (PLEG): container finished" podID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerID="8a28ad29fe3e7d26bcb2ff22d8309bfc63a49050f1be0af66312942340bf4fa8" exitCode=0 Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.053223 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerDied","Data":"8a28ad29fe3e7d26bcb2ff22d8309bfc63a49050f1be0af66312942340bf4fa8"} Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.053243 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerStarted","Data":"226b50ccc8da7dde651e7ffa33808121482e7faf914c8b5c92fa64fee7b8f7b3"} Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.056246 4737 generic.go:334] "Generic (PLEG): container finished" podID="52bcbbde-c297-4cce-80fd-cde90894b5df" 
containerID="dd8e61368dc2b25d4316711d8f7eda5b636cfa76c3ae03f094e0981944ff46bc" exitCode=0 Jan 26 18:44:11 crc kubenswrapper[4737]: I0126 18:44:11.056288 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" event={"ID":"52bcbbde-c297-4cce-80fd-cde90894b5df","Type":"ContainerDied","Data":"dd8e61368dc2b25d4316711d8f7eda5b636cfa76c3ae03f094e0981944ff46bc"} Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.368723 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.400720 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util\") pod \"52bcbbde-c297-4cce-80fd-cde90894b5df\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.400777 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle\") pod \"52bcbbde-c297-4cce-80fd-cde90894b5df\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.400916 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw9zr\" (UniqueName: \"kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr\") pod \"52bcbbde-c297-4cce-80fd-cde90894b5df\" (UID: \"52bcbbde-c297-4cce-80fd-cde90894b5df\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.402013 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle" (OuterVolumeSpecName: "bundle") pod 
"52bcbbde-c297-4cce-80fd-cde90894b5df" (UID: "52bcbbde-c297-4cce-80fd-cde90894b5df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.410530 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr" (OuterVolumeSpecName: "kube-api-access-bw9zr") pod "52bcbbde-c297-4cce-80fd-cde90894b5df" (UID: "52bcbbde-c297-4cce-80fd-cde90894b5df"). InnerVolumeSpecName "kube-api-access-bw9zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.417880 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util" (OuterVolumeSpecName: "util") pod "52bcbbde-c297-4cce-80fd-cde90894b5df" (UID: "52bcbbde-c297-4cce-80fd-cde90894b5df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.435276 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.502769 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh\") pod \"65f7c351-84bb-41e0-9775-a820da54e2eb\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.502866 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util\") pod \"65f7c351-84bb-41e0-9775-a820da54e2eb\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.503020 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle\") pod \"65f7c351-84bb-41e0-9775-a820da54e2eb\" (UID: \"65f7c351-84bb-41e0-9775-a820da54e2eb\") " Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.503429 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw9zr\" (UniqueName: \"kubernetes.io/projected/52bcbbde-c297-4cce-80fd-cde90894b5df-kube-api-access-bw9zr\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.503459 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.503476 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52bcbbde-c297-4cce-80fd-cde90894b5df-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 
18:44:12.503950 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle" (OuterVolumeSpecName: "bundle") pod "65f7c351-84bb-41e0-9775-a820da54e2eb" (UID: "65f7c351-84bb-41e0-9775-a820da54e2eb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.505993 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh" (OuterVolumeSpecName: "kube-api-access-f6nfh") pod "65f7c351-84bb-41e0-9775-a820da54e2eb" (UID: "65f7c351-84bb-41e0-9775-a820da54e2eb"). InnerVolumeSpecName "kube-api-access-f6nfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.564286 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util" (OuterVolumeSpecName: "util") pod "65f7c351-84bb-41e0-9775-a820da54e2eb" (UID: "65f7c351-84bb-41e0-9775-a820da54e2eb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.605463 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.605513 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6nfh\" (UniqueName: \"kubernetes.io/projected/65f7c351-84bb-41e0-9775-a820da54e2eb-kube-api-access-f6nfh\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:12 crc kubenswrapper[4737]: I0126 18:44:12.605525 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65f7c351-84bb-41e0-9775-a820da54e2eb-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.069343 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerStarted","Data":"7be3ce58098c0f13c11b070f1f452f9dc793991ffca79a5e51399ff6924edd7a"} Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.071994 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.071988 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf" event={"ID":"52bcbbde-c297-4cce-80fd-cde90894b5df","Type":"ContainerDied","Data":"fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd"} Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.072105 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea7c215b1c27bb8c3297cd82c987c15bef752db20de365f629676c214cba1cd" Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.074298 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" event={"ID":"65f7c351-84bb-41e0-9775-a820da54e2eb","Type":"ContainerDied","Data":"a8df0a4216529a9d62eecfae469ff79b95249e471b6e0e8d0948083e362e99cc"} Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.074323 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8df0a4216529a9d62eecfae469ff79b95249e471b6e0e8d0948083e362e99cc" Jan 26 18:44:13 crc kubenswrapper[4737]: I0126 18:44:13.074397 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7" Jan 26 18:44:14 crc kubenswrapper[4737]: I0126 18:44:14.085600 4737 generic.go:334] "Generic (PLEG): container finished" podID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerID="7be3ce58098c0f13c11b070f1f452f9dc793991ffca79a5e51399ff6924edd7a" exitCode=0 Jan 26 18:44:14 crc kubenswrapper[4737]: I0126 18:44:14.085664 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerDied","Data":"7be3ce58098c0f13c11b070f1f452f9dc793991ffca79a5e51399ff6924edd7a"} Jan 26 18:44:15 crc kubenswrapper[4737]: I0126 18:44:15.095825 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerStarted","Data":"93730cefc672ce807963a0dd492d2a51ee24927467d32d7c8d256ef8ab6fab43"} Jan 26 18:44:15 crc kubenswrapper[4737]: I0126 18:44:15.125428 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqgvz" podStartSLOduration=1.6720040200000001 podStartE2EDuration="5.125402995s" podCreationTimestamp="2026-01-26 18:44:10 +0000 UTC" firstStartedPulling="2026-01-26 18:44:11.054598401 +0000 UTC m=+824.362793109" lastFinishedPulling="2026-01-26 18:44:14.507997376 +0000 UTC m=+827.816192084" observedRunningTime="2026-01-26 18:44:15.122947332 +0000 UTC m=+828.431142080" watchObservedRunningTime="2026-01-26 18:44:15.125402995 +0000 UTC m=+828.433597723" Jan 26 18:44:20 crc kubenswrapper[4737]: I0126 18:44:20.482483 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:20 crc kubenswrapper[4737]: I0126 18:44:20.484440 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:21 crc kubenswrapper[4737]: I0126 18:44:21.530566 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xqgvz" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="registry-server" probeResult="failure" output=< Jan 26 18:44:21 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 18:44:21 crc kubenswrapper[4737]: > Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143548 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9"] Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143843 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="pull" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143858 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="pull" Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143872 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="util" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143880 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="util" Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143894 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143903 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143918 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="pull" Jan 26 
18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143924 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="pull" Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143935 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="util" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143941 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="util" Jan 26 18:44:23 crc kubenswrapper[4737]: E0126 18:44:23.143955 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.143961 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.144087 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bcbbde-c297-4cce-80fd-cde90894b5df" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.144096 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f7c351-84bb-41e0-9775-a820da54e2eb" containerName="extract" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.144931 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.149527 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.149702 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-mkzss" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.149834 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.149848 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.150025 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.150465 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.161919 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9"] Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.186512 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc 
kubenswrapper[4737]: I0126 18:44:23.186591 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-webhook-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.186634 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-apiservice-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.186704 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/697c3f44-b05d-4404-bd79-a93c1c29b8ad-manager-config\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.186739 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tkw\" (UniqueName: \"kubernetes.io/projected/697c3f44-b05d-4404-bd79-a93c1c29b8ad-kube-api-access-n2tkw\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.288089 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/697c3f44-b05d-4404-bd79-a93c1c29b8ad-manager-config\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.288144 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tkw\" (UniqueName: \"kubernetes.io/projected/697c3f44-b05d-4404-bd79-a93c1c29b8ad-kube-api-access-n2tkw\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.288218 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.288256 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-webhook-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.288278 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-apiservice-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: 
\"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.289319 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/697c3f44-b05d-4404-bd79-a93c1c29b8ad-manager-config\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.294436 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-webhook-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.294467 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.294587 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/697c3f44-b05d-4404-bd79-a93c1c29b8ad-apiservice-cert\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.309745 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n2tkw\" (UniqueName: \"kubernetes.io/projected/697c3f44-b05d-4404-bd79-a93c1c29b8ad-kube-api-access-n2tkw\") pod \"loki-operator-controller-manager-6dbff5787b-d86s9\" (UID: \"697c3f44-b05d-4404-bd79-a93c1c29b8ad\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.464132 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:23 crc kubenswrapper[4737]: I0126 18:44:23.765322 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9"] Jan 26 18:44:24 crc kubenswrapper[4737]: I0126 18:44:24.160985 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" event={"ID":"697c3f44-b05d-4404-bd79-a93c1c29b8ad","Type":"ContainerStarted","Data":"14630a15e2f5091fa983febb5e0e3cab2f9b75f2f53d5711a9c37289a091e772"} Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.328000 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl"] Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.331466 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.337416 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-5fvjp" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.337560 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.337562 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.354918 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl"] Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.462654 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwdp\" (UniqueName: \"kubernetes.io/projected/19021b35-3bd2-40f3-a312-466b0c15bc35-kube-api-access-4lwdp\") pod \"cluster-logging-operator-79cf69ddc8-zx2hl\" (UID: \"19021b35-3bd2-40f3-a312-466b0c15bc35\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.564589 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lwdp\" (UniqueName: \"kubernetes.io/projected/19021b35-3bd2-40f3-a312-466b0c15bc35-kube-api-access-4lwdp\") pod \"cluster-logging-operator-79cf69ddc8-zx2hl\" (UID: \"19021b35-3bd2-40f3-a312-466b0c15bc35\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.602207 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lwdp\" (UniqueName: \"kubernetes.io/projected/19021b35-3bd2-40f3-a312-466b0c15bc35-kube-api-access-4lwdp\") pod 
\"cluster-logging-operator-79cf69ddc8-zx2hl\" (UID: \"19021b35-3bd2-40f3-a312-466b0c15bc35\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" Jan 26 18:44:27 crc kubenswrapper[4737]: I0126 18:44:27.648663 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" Jan 26 18:44:28 crc kubenswrapper[4737]: I0126 18:44:28.012711 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl"] Jan 26 18:44:28 crc kubenswrapper[4737]: W0126 18:44:28.018052 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19021b35_3bd2_40f3_a312_466b0c15bc35.slice/crio-4fe8c4b8c39049ec6a9467a1172fee76eb8814f0bf45d5695e7476dd9d31af44 WatchSource:0}: Error finding container 4fe8c4b8c39049ec6a9467a1172fee76eb8814f0bf45d5695e7476dd9d31af44: Status 404 returned error can't find the container with id 4fe8c4b8c39049ec6a9467a1172fee76eb8814f0bf45d5695e7476dd9d31af44 Jan 26 18:44:28 crc kubenswrapper[4737]: I0126 18:44:28.227231 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" event={"ID":"19021b35-3bd2-40f3-a312-466b0c15bc35","Type":"ContainerStarted","Data":"4fe8c4b8c39049ec6a9467a1172fee76eb8814f0bf45d5695e7476dd9d31af44"} Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.551816 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.606062 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.949331 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.949396 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.949442 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.950049 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:44:30 crc kubenswrapper[4737]: I0126 18:44:30.950126 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf" gracePeriod=600 Jan 26 18:44:31 crc kubenswrapper[4737]: I0126 18:44:31.254697 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf" exitCode=0 Jan 26 18:44:31 crc kubenswrapper[4737]: I0126 18:44:31.254791 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf"} Jan 26 18:44:31 crc kubenswrapper[4737]: I0126 18:44:31.255220 4737 scope.go:117] "RemoveContainer" containerID="85a890545a9ff2202b93191292b7341bdb6c769889c0a4e83764a0aa6d4f8d25" Jan 26 18:44:32 crc kubenswrapper[4737]: I0126 18:44:32.285487 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135"} Jan 26 18:44:32 crc kubenswrapper[4737]: I0126 18:44:32.288788 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" event={"ID":"697c3f44-b05d-4404-bd79-a93c1c29b8ad","Type":"ContainerStarted","Data":"29032bcc83db9ccdd911fa209ca321907844ae6944b137bf6210e27c262d09ff"} Jan 26 18:44:33 crc kubenswrapper[4737]: I0126 18:44:33.751899 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:33 crc kubenswrapper[4737]: I0126 18:44:33.753243 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqgvz" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="registry-server" containerID="cri-o://93730cefc672ce807963a0dd492d2a51ee24927467d32d7c8d256ef8ab6fab43" gracePeriod=2 Jan 26 18:44:34 crc kubenswrapper[4737]: I0126 18:44:34.304710 4737 generic.go:334] "Generic (PLEG): container finished" podID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerID="93730cefc672ce807963a0dd492d2a51ee24927467d32d7c8d256ef8ab6fab43" exitCode=0 Jan 26 18:44:34 crc kubenswrapper[4737]: I0126 18:44:34.304750 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerDied","Data":"93730cefc672ce807963a0dd492d2a51ee24927467d32d7c8d256ef8ab6fab43"} Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.002194 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.028640 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities\") pod \"6ba0b006-b876-4817-8ea5-369a59bf660a\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.028755 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfdjk\" (UniqueName: \"kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk\") pod \"6ba0b006-b876-4817-8ea5-369a59bf660a\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.028798 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content\") pod \"6ba0b006-b876-4817-8ea5-369a59bf660a\" (UID: \"6ba0b006-b876-4817-8ea5-369a59bf660a\") " Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.030116 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities" (OuterVolumeSpecName: "utilities") pod "6ba0b006-b876-4817-8ea5-369a59bf660a" (UID: "6ba0b006-b876-4817-8ea5-369a59bf660a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.058410 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk" (OuterVolumeSpecName: "kube-api-access-xfdjk") pod "6ba0b006-b876-4817-8ea5-369a59bf660a" (UID: "6ba0b006-b876-4817-8ea5-369a59bf660a"). InnerVolumeSpecName "kube-api-access-xfdjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.131116 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.131164 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfdjk\" (UniqueName: \"kubernetes.io/projected/6ba0b006-b876-4817-8ea5-369a59bf660a-kube-api-access-xfdjk\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.168610 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ba0b006-b876-4817-8ea5-369a59bf660a" (UID: "6ba0b006-b876-4817-8ea5-369a59bf660a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.232851 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ba0b006-b876-4817-8ea5-369a59bf660a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.342377 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqgvz" event={"ID":"6ba0b006-b876-4817-8ea5-369a59bf660a","Type":"ContainerDied","Data":"226b50ccc8da7dde651e7ffa33808121482e7faf914c8b5c92fa64fee7b8f7b3"} Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.342447 4737 scope.go:117] "RemoveContainer" containerID="93730cefc672ce807963a0dd492d2a51ee24927467d32d7c8d256ef8ab6fab43" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.342505 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqgvz" Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.381379 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:37 crc kubenswrapper[4737]: I0126 18:44:37.397848 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqgvz"] Jan 26 18:44:38 crc kubenswrapper[4737]: I0126 18:44:38.989438 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" path="/var/lib/kubelet/pods/6ba0b006-b876-4817-8ea5-369a59bf660a/volumes" Jan 26 18:44:39 crc kubenswrapper[4737]: I0126 18:44:39.590159 4737 scope.go:117] "RemoveContainer" containerID="7be3ce58098c0f13c11b070f1f452f9dc793991ffca79a5e51399ff6924edd7a" Jan 26 18:44:40 crc kubenswrapper[4737]: I0126 18:44:40.427691 4737 scope.go:117] "RemoveContainer" containerID="8a28ad29fe3e7d26bcb2ff22d8309bfc63a49050f1be0af66312942340bf4fa8" Jan 26 18:44:41 crc 
kubenswrapper[4737]: I0126 18:44:41.386958 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" event={"ID":"19021b35-3bd2-40f3-a312-466b0c15bc35","Type":"ContainerStarted","Data":"d8c74f35098cf55efe0b6e04f8d0bd53cbfde4df1189d1abfe68507c41dbe1e0"} Jan 26 18:44:41 crc kubenswrapper[4737]: I0126 18:44:41.388751 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" event={"ID":"697c3f44-b05d-4404-bd79-a93c1c29b8ad","Type":"ContainerStarted","Data":"52ebdecce263cf95ce8381cc7d74b4bdf3510a3ed2473a13026e2e8e08ba4df9"} Jan 26 18:44:41 crc kubenswrapper[4737]: I0126 18:44:41.388936 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:41 crc kubenswrapper[4737]: I0126 18:44:41.391596 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" Jan 26 18:44:41 crc kubenswrapper[4737]: I0126 18:44:41.415014 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-zx2hl" podStartSLOduration=2.019026798 podStartE2EDuration="14.414986532s" podCreationTimestamp="2026-01-26 18:44:27 +0000 UTC" firstStartedPulling="2026-01-26 18:44:28.031718439 +0000 UTC m=+841.339913147" lastFinishedPulling="2026-01-26 18:44:40.427678173 +0000 UTC m=+853.735872881" observedRunningTime="2026-01-26 18:44:41.409029164 +0000 UTC m=+854.717223912" watchObservedRunningTime="2026-01-26 18:44:41.414986532 +0000 UTC m=+854.723181250" Jan 26 18:44:41 crc kubenswrapper[4737]: I0126 18:44:41.448815 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6dbff5787b-d86s9" podStartSLOduration=1.700732449 
podStartE2EDuration="18.448782062s" podCreationTimestamp="2026-01-26 18:44:23 +0000 UTC" firstStartedPulling="2026-01-26 18:44:23.776664952 +0000 UTC m=+837.084859660" lastFinishedPulling="2026-01-26 18:44:40.524714565 +0000 UTC m=+853.832909273" observedRunningTime="2026-01-26 18:44:41.444221759 +0000 UTC m=+854.752416507" watchObservedRunningTime="2026-01-26 18:44:41.448782062 +0000 UTC m=+854.756976790" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.163863 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 26 18:44:46 crc kubenswrapper[4737]: E0126 18:44:46.164608 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="registry-server" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.164620 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="registry-server" Jan 26 18:44:46 crc kubenswrapper[4737]: E0126 18:44:46.164631 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="extract-content" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.164636 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="extract-content" Jan 26 18:44:46 crc kubenswrapper[4737]: E0126 18:44:46.164645 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="extract-utilities" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.164651 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="extract-utilities" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.164757 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba0b006-b876-4817-8ea5-369a59bf660a" containerName="registry-server" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 
18:44:46.165231 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.167111 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.167728 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.173183 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.269948 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj984\" (UniqueName: \"kubernetes.io/projected/6a522380-2333-4f63-a5cc-df08a9719fd5-kube-api-access-xj984\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.270329 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2230df6b-78da-410d-8087-01d53cbb240c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2230df6b-78da-410d-8087-01d53cbb240c\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.371470 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2230df6b-78da-410d-8087-01d53cbb240c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2230df6b-78da-410d-8087-01d53cbb240c\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.371564 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj984\" (UniqueName: 
\"kubernetes.io/projected/6a522380-2333-4f63-a5cc-df08a9719fd5-kube-api-access-xj984\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.374392 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.374428 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2230df6b-78da-410d-8087-01d53cbb240c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2230df6b-78da-410d-8087-01d53cbb240c\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d28e39917b6af6bf4bdbdfcf91da24ddf6f7192d7d5c73c964a155215063bf1/globalmount\"" pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.396720 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2230df6b-78da-410d-8087-01d53cbb240c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2230df6b-78da-410d-8087-01d53cbb240c\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.398347 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj984\" (UniqueName: \"kubernetes.io/projected/6a522380-2333-4f63-a5cc-df08a9719fd5-kube-api-access-xj984\") pod \"minio\" (UID: \"6a522380-2333-4f63-a5cc-df08a9719fd5\") " pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.534230 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 26 18:44:46 crc kubenswrapper[4737]: I0126 18:44:46.941442 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 26 18:44:47 crc kubenswrapper[4737]: I0126 18:44:47.435758 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6a522380-2333-4f63-a5cc-df08a9719fd5","Type":"ContainerStarted","Data":"029b75372a1e189a806052230a05df496da4de4a69818bd73a169af2d5abbaa5"} Jan 26 18:44:51 crc kubenswrapper[4737]: I0126 18:44:51.466498 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6a522380-2333-4f63-a5cc-df08a9719fd5","Type":"ContainerStarted","Data":"3eef4d07a339b6ebb5ebfb590ba420f686a585b7a8292bfd0ca074da74f2aa43"} Jan 26 18:44:51 crc kubenswrapper[4737]: I0126 18:44:51.485832 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.947775738 podStartE2EDuration="7.485813261s" podCreationTimestamp="2026-01-26 18:44:44 +0000 UTC" firstStartedPulling="2026-01-26 18:44:46.952265595 +0000 UTC m=+860.260460303" lastFinishedPulling="2026-01-26 18:44:50.490303118 +0000 UTC m=+863.798497826" observedRunningTime="2026-01-26 18:44:51.481025362 +0000 UTC m=+864.789220100" watchObservedRunningTime="2026-01-26 18:44:51.485813261 +0000 UTC m=+864.794007969" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.245107 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.246476 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.248035 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.248529 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.249356 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-m866d" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.249361 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.258278 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.258802 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.351770 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-config\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.352021 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: 
\"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.352133 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsgm\" (UniqueName: \"kubernetes.io/projected/f15f2968-e05a-49f0-8024-3a1764d4b9e2-kube-api-access-rmsgm\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.352250 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.352350 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.411962 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-rsdfq"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.412960 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.415270 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.415886 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.418214 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.437774 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-rsdfq"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.454618 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.454729 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.454784 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-config\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: 
\"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.454809 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.454857 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmsgm\" (UniqueName: \"kubernetes.io/projected/f15f2968-e05a-49f0-8024-3a1764d4b9e2-kube-api-access-rmsgm\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.456753 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-config\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.457365 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.464575 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" 
(UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.464977 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/f15f2968-e05a-49f0-8024-3a1764d4b9e2-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.487905 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmsgm\" (UniqueName: \"kubernetes.io/projected/f15f2968-e05a-49f0-8024-3a1764d4b9e2-kube-api-access-rmsgm\") pod \"logging-loki-distributor-5f678c8dd6-6wp46\" (UID: \"f15f2968-e05a-49f0-8024-3a1764d4b9e2\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.530209 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.534315 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.537789 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.538056 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557581 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557648 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557688 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557724 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-config\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557751 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvncz\" (UniqueName: \"kubernetes.io/projected/15449cbd-7753-47b6-811f-059d9f83ff0b-kube-api-access-gvncz\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557800 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfd2f\" (UniqueName: \"kubernetes.io/projected/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-kube-api-access-vfd2f\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557824 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557871 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: 
\"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557902 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557925 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.557956 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-config\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.575618 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.612686 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661691 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfd2f\" (UniqueName: \"kubernetes.io/projected/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-kube-api-access-vfd2f\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661731 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661783 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661807 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661823 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661848 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-config\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661871 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661896 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661956 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-config\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.661981 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvncz\" (UniqueName: \"kubernetes.io/projected/15449cbd-7753-47b6-811f-059d9f83ff0b-kube-api-access-gvncz\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.663786 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-config\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.664463 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.664944 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15449cbd-7753-47b6-811f-059d9f83ff0b-config\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.665691 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.667455 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.669928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.674816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-s3\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " 
pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.676258 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.676757 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15449cbd-7753-47b6-811f-059d9f83ff0b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.681958 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.683118 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.688716 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.691572 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-vdwz9" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.691767 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.692232 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.692275 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.692800 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvncz\" (UniqueName: \"kubernetes.io/projected/15449cbd-7753-47b6-811f-059d9f83ff0b-kube-api-access-gvncz\") pod \"logging-loki-querier-76788598db-rsdfq\" (UID: \"15449cbd-7753-47b6-811f-059d9f83ff0b\") " pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.694367 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.696020 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.702158 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfd2f\" (UniqueName: 
\"kubernetes.io/projected/954c3b49-1fc8-4e5c-9312-7b8e66b7a681-kube-api-access-vfd2f\") pod \"logging-loki-query-frontend-69d9546745-qqkdc\" (UID: \"954c3b49-1fc8-4e5c-9312-7b8e66b7a681\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.706754 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.711368 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.719420 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng"] Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.732808 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.855908 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867142 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxbn\" (UniqueName: \"kubernetes.io/projected/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-kube-api-access-lzxbn\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867501 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867534 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867558 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867577 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867593 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867612 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867634 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867654 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n62c\" (UniqueName: 
\"kubernetes.io/projected/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-kube-api-access-5n62c\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867680 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867719 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867751 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867786 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867809 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867832 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.867850 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969644 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969704 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-client-http\") pod 
\"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969742 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969770 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969793 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n62c\" (UniqueName: \"kubernetes.io/projected/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-kube-api-access-5n62c\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969850 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rbac\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969902 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969941 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.969964 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970000 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970024 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970066 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxbn\" (UniqueName: \"kubernetes.io/projected/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-kube-api-access-lzxbn\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970124 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970160 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970194 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " 
pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.970894 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.971423 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.971535 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.972158 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.972675 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: E0126 18:44:58.972741 4737 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 26 18:44:58 crc kubenswrapper[4737]: E0126 18:44:58.972787 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret podName:e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e nodeName:}" failed. No retries permitted until 2026-01-26 18:44:59.472774973 +0000 UTC m=+872.780969681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret") pod "logging-loki-gateway-5c6b766d5f-c5kng" (UID: "e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e") : secret "logging-loki-gateway-http" not found Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.974015 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.977001 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-lokistack-gateway\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: E0126 18:44:58.978384 4737 
secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Jan 26 18:44:58 crc kubenswrapper[4737]: E0126 18:44:58.978506 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret podName:225843b1-6423-4d7f-aa3c-5945a9e4bd8e nodeName:}" failed. No retries permitted until 2026-01-26 18:44:59.478442834 +0000 UTC m=+872.786637542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret") pod "logging-loki-gateway-5c6b766d5f-kcfsl" (UID: "225843b1-6423-4d7f-aa3c-5945a9e4bd8e") : secret "logging-loki-gateway-http" not found Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.979722 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-rbac\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.984900 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.985176 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 
26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.985980 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tenants\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:58 crc kubenswrapper[4737]: I0126 18:44:58.986523 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.008887 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxbn\" (UniqueName: \"kubernetes.io/projected/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-kube-api-access-lzxbn\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.009635 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n62c\" (UniqueName: \"kubernetes.io/projected/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-kube-api-access-5n62c\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.110923 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.243807 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-querier-76788598db-rsdfq"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.313693 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.399039 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.399896 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: W0126 18:44:59.403578 4737 reflector.go:561] object-"openshift-logging"/"logging-loki-ingester-http": failed to list *v1.Secret: secrets "logging-loki-ingester-http" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-logging": no relationship found between node 'crc' and this object Jan 26 18:44:59 crc kubenswrapper[4737]: E0126 18:44:59.404109 4737 reflector.go:158] "Unhandled Error" err="object-\"openshift-logging\"/\"logging-loki-ingester-http\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"logging-loki-ingester-http\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-logging\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:44:59 crc kubenswrapper[4737]: W0126 18:44:59.404367 4737 reflector.go:561] object-"openshift-logging"/"logging-loki-ingester-grpc": failed to list *v1.Secret: secrets "logging-loki-ingester-grpc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-logging": no relationship found between node 'crc' and this object Jan 26 18:44:59 crc kubenswrapper[4737]: E0126 18:44:59.404398 4737 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-logging\"/\"logging-loki-ingester-grpc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"logging-loki-ingester-grpc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-logging\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.417683 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.478818 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.478933 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.483167 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/225843b1-6423-4d7f-aa3c-5945a9e4bd8e-tls-secret\") pod \"logging-loki-gateway-5c6b766d5f-kcfsl\" (UID: \"225843b1-6423-4d7f-aa3c-5945a9e4bd8e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.483807 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e-tls-secret\") pod 
\"logging-loki-gateway-5c6b766d5f-c5kng\" (UID: \"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e\") " pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.517519 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.518325 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.521094 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.522286 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" event={"ID":"954c3b49-1fc8-4e5c-9312-7b8e66b7a681","Type":"ContainerStarted","Data":"3a4da8ecebfdbc6c10ee478a59b2bf1872f6c680d0fa10a2e745f43634efaacc"} Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.523685 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" event={"ID":"15449cbd-7753-47b6-811f-059d9f83ff0b","Type":"ContainerStarted","Data":"712310a276d5b7847155f4182ef9ea430b06c815d33581ecae1fc9caaf31abbc"} Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.525830 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" event={"ID":"f15f2968-e05a-49f0-8024-3a1764d4b9e2","Type":"ContainerStarted","Data":"bef5a13a9a1a940ce8d77a67dc310da88cac31d32d1ac28449e20aa0ef3c399c"} Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.527629 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.542421 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580089 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580170 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580203 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkll9\" (UniqueName: \"kubernetes.io/projected/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-kube-api-access-mkll9\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580239 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580275 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580390 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580696 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-config\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.580728 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.606582 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.612032 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.612928 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.615912 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.616594 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.625323 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.647931 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682011 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-config\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682080 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmd8t\" (UniqueName: \"kubernetes.io/projected/274a7c37-3e64-45ce-8d6f-dfeac9c15288-kube-api-access-qmd8t\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682115 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682141 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682167 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682191 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682249 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682301 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682327 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkll9\" (UniqueName: \"kubernetes.io/projected/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-kube-api-access-mkll9\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682351 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-config\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682391 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682422 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682579 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682698 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.682795 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.683221 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.683290 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-config\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.686756 4737 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.686810 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4104a5c405489b113e43865d7dced617e59141180f55aebc0428864b0c55af33/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.687911 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.696100 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.696163 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e7b1c3af2364a74f87898e8f6baa6fa24d6b5222f72932ed806dac400a6c636/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.700559 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkll9\" (UniqueName: \"kubernetes.io/projected/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-kube-api-access-mkll9\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.729345 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-829e0d9d-3935-4f19-b3e0-ac0fdf7418e2\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.729368 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b79f18a-b95d-4c97-8a65-7c698ca6d425\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.784616 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" 
(UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.784673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.784731 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.784775 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-config\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.785811 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.786692 4737 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-config\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.787633 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmjt\" (UniqueName: \"kubernetes.io/projected/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-kube-api-access-lvmjt\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.787675 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.787699 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.787736 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmd8t\" (UniqueName: \"kubernetes.io/projected/274a7c37-3e64-45ce-8d6f-dfeac9c15288-kube-api-access-qmd8t\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.788129 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.788157 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.788191 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.788712 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.788879 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") 
" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.789616 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.791716 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.792509 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.792998 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.793035 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f413168763db1c827e5f85d08919a4152b9fcdac9ee544708936212fabec6c33/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.796746 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/274a7c37-3e64-45ce-8d6f-dfeac9c15288-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.806905 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmd8t\" (UniqueName: \"kubernetes.io/projected/274a7c37-3e64-45ce-8d6f-dfeac9c15288-kube-api-access-qmd8t\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.821542 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c795593-ee1e-4eba-9e57-2a61ce243b04\") pod \"logging-loki-compactor-0\" (UID: \"274a7c37-3e64-45ce-8d6f-dfeac9c15288\") " pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.844563 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.874364 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl"] Jan 26 18:44:59 crc kubenswrapper[4737]: W0126 18:44:59.888951 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod225843b1_6423_4d7f_aa3c_5945a9e4bd8e.slice/crio-a620cd39476e5130bca3f71892dd7ed9a90d0683f160f6790e23e15ea2eacf8f WatchSource:0}: Error finding container a620cd39476e5130bca3f71892dd7ed9a90d0683f160f6790e23e15ea2eacf8f: Status 404 returned error can't find the container with id a620cd39476e5130bca3f71892dd7ed9a90d0683f160f6790e23e15ea2eacf8f Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891598 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891746 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmjt\" (UniqueName: \"kubernetes.io/projected/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-kube-api-access-lvmjt\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891768 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891784 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891835 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891860 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.891923 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.893588 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.894054 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-config\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.896927 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.905050 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.905377 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.905446 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d69470e1fcaba54d3fde76b2bf6ca5ea1f94e4a9e7a92a18a2820f9085a4deb8/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.905703 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.913143 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmjt\" (UniqueName: \"kubernetes.io/projected/7d74d1ee-657b-4404-9390-cd94e3cb6d2c-kube-api-access-lvmjt\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:44:59 crc kubenswrapper[4737]: I0126 18:44:59.937347 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7148831-8091-4ad6-bf2f-cae0766197c4\") pod \"logging-loki-index-gateway-0\" (UID: \"7d74d1ee-657b-4404-9390-cd94e3cb6d2c\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.081955 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-compactor-0"] Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.143554 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng"] Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.150849 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z"] Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.151996 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.154115 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.154124 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.157773 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z"] Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.226896 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.298544 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.298668 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9zf\" (UniqueName: \"kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.298775 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.399991 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.400354 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9zf\" 
(UniqueName: \"kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.400420 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.402110 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.406176 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.418014 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx9zf\" (UniqueName: \"kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf\") pod \"collect-profiles-29490885-vng2z\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.473130 
4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.536989 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"274a7c37-3e64-45ce-8d6f-dfeac9c15288","Type":"ContainerStarted","Data":"f66ae64a1199c1f5b39e7a1c3ddd31be49d7735b2ac22fd4441c6d3f2836317d"} Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.539463 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" event={"ID":"225843b1-6423-4d7f-aa3c-5945a9e4bd8e","Type":"ContainerStarted","Data":"a620cd39476e5130bca3f71892dd7ed9a90d0683f160f6790e23e15ea2eacf8f"} Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.541431 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" event={"ID":"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e","Type":"ContainerStarted","Data":"d481c7f6b764ea6f1b1c7564fad8b415b1a564285b4e360e0f53c22fde80a289"} Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.596429 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.622622 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.680792 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 26 18:45:00 crc kubenswrapper[4737]: E0126 18:45:00.683262 4737 secret.go:188] Couldn't get secret 
openshift-logging/logging-loki-ingester-http: failed to sync secret cache: timed out waiting for the condition Jan 26 18:45:00 crc kubenswrapper[4737]: E0126 18:45:00.683388 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http podName:a05526c9-7b63-4f57-bdaf-95d8a7912bb8 nodeName:}" failed. No retries permitted until 2026-01-26 18:45:01.183358067 +0000 UTC m=+874.491552765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "logging-loki-ingester-http" (UniqueName: "kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http") pod "logging-loki-ingester-0" (UID: "a05526c9-7b63-4f57-bdaf-95d8a7912bb8") : failed to sync secret cache: timed out waiting for the condition Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.811326 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 26 18:45:00 crc kubenswrapper[4737]: I0126 18:45:00.935484 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z"] Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.235827 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.241394 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/a05526c9-7b63-4f57-bdaf-95d8a7912bb8-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"a05526c9-7b63-4f57-bdaf-95d8a7912bb8\") " pod="openshift-logging/logging-loki-ingester-0" 
Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.521060 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.578463 4737 generic.go:334] "Generic (PLEG): container finished" podID="52054249-74bc-48df-978b-dcf49912e6c7" containerID="c0fa96a76151b48afaa093e6b9a004113454e0a4197ed339d2d6049ead31e772" exitCode=0 Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.579255 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" event={"ID":"52054249-74bc-48df-978b-dcf49912e6c7","Type":"ContainerDied","Data":"c0fa96a76151b48afaa093e6b9a004113454e0a4197ed339d2d6049ead31e772"} Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.579286 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" event={"ID":"52054249-74bc-48df-978b-dcf49912e6c7","Type":"ContainerStarted","Data":"1bea778fc3195742b4a2d639e25c42209c4be1458dcc2fbe234106983de08a81"} Jan 26 18:45:01 crc kubenswrapper[4737]: I0126 18:45:01.614314 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d74d1ee-657b-4404-9390-cd94e3cb6d2c","Type":"ContainerStarted","Data":"ab6906d0cc1422767c257e8a95a08977591680758c787b6526f34630e843add6"} Jan 26 18:45:02 crc kubenswrapper[4737]: I0126 18:45:02.097963 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 26 18:45:04 crc kubenswrapper[4737]: W0126 18:45:04.308141 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05526c9_7b63_4f57_bdaf_95d8a7912bb8.slice/crio-e2d3691103fb2e28837f0e6bfee3753076585e848c508bcdf106b34d1eb47adf WatchSource:0}: Error finding container 
e2d3691103fb2e28837f0e6bfee3753076585e848c508bcdf106b34d1eb47adf: Status 404 returned error can't find the container with id e2d3691103fb2e28837f0e6bfee3753076585e848c508bcdf106b34d1eb47adf Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.392650 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.529193 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume\") pod \"52054249-74bc-48df-978b-dcf49912e6c7\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.529373 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume\") pod \"52054249-74bc-48df-978b-dcf49912e6c7\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.529402 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx9zf\" (UniqueName: \"kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf\") pod \"52054249-74bc-48df-978b-dcf49912e6c7\" (UID: \"52054249-74bc-48df-978b-dcf49912e6c7\") " Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.529932 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "52054249-74bc-48df-978b-dcf49912e6c7" (UID: "52054249-74bc-48df-978b-dcf49912e6c7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.534453 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52054249-74bc-48df-978b-dcf49912e6c7" (UID: "52054249-74bc-48df-978b-dcf49912e6c7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.535208 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf" (OuterVolumeSpecName: "kube-api-access-fx9zf") pod "52054249-74bc-48df-978b-dcf49912e6c7" (UID: "52054249-74bc-48df-978b-dcf49912e6c7"). InnerVolumeSpecName "kube-api-access-fx9zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.631514 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52054249-74bc-48df-978b-dcf49912e6c7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.631554 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx9zf\" (UniqueName: \"kubernetes.io/projected/52054249-74bc-48df-978b-dcf49912e6c7-kube-api-access-fx9zf\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.631564 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52054249-74bc-48df-978b-dcf49912e6c7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.638544 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" 
event={"ID":"52054249-74bc-48df-978b-dcf49912e6c7","Type":"ContainerDied","Data":"1bea778fc3195742b4a2d639e25c42209c4be1458dcc2fbe234106983de08a81"} Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.638579 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bea778fc3195742b4a2d639e25c42209c4be1458dcc2fbe234106983de08a81" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.638631 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z" Jan 26 18:45:04 crc kubenswrapper[4737]: I0126 18:45:04.645937 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a05526c9-7b63-4f57-bdaf-95d8a7912bb8","Type":"ContainerStarted","Data":"e2d3691103fb2e28837f0e6bfee3753076585e848c508bcdf106b34d1eb47adf"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.668116 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" event={"ID":"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e","Type":"ContainerStarted","Data":"25d0310d155ac5f6d69fd9e182fae9e7529776e872dc961339480a1c69422243"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.671476 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" event={"ID":"f15f2968-e05a-49f0-8024-3a1764d4b9e2","Type":"ContainerStarted","Data":"65753f2dd59e116fa02c48deb2be3bc87b9e7f0df436e8407c352f640eb86f41"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.671616 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.674446 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"274a7c37-3e64-45ce-8d6f-dfeac9c15288","Type":"ContainerStarted","Data":"da3c542496b5627753108d73a8ff881ad5b57191a9291732139f2ae94a8b97f0"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.674574 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.677375 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" event={"ID":"225843b1-6423-4d7f-aa3c-5945a9e4bd8e","Type":"ContainerStarted","Data":"5e2585b0dd2ac778e41c652293f96b903104f699f93d02a3c453ea9feca36748"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.678928 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" event={"ID":"15449cbd-7753-47b6-811f-059d9f83ff0b","Type":"ContainerStarted","Data":"2490464427c64c05734fb581aa2832ae231793f83b152eed1e3748c991505d19"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.679096 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.680687 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"7d74d1ee-657b-4404-9390-cd94e3cb6d2c","Type":"ContainerStarted","Data":"da093e1ad7fa6ef53bd5e60988995db11512c8f8975493390bf7d2e11270f4c1"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.680977 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.684913 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" 
event={"ID":"954c3b49-1fc8-4e5c-9312-7b8e66b7a681","Type":"ContainerStarted","Data":"3e60f70e6c230b1d53e401a6da3532a55604a8e60c926f0c521f73eda549dec4"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.684989 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.686955 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"a05526c9-7b63-4f57-bdaf-95d8a7912bb8","Type":"ContainerStarted","Data":"caba2da5901e9a2351c93e4ffecdb450f5ad610bf71a4ebad9f507feab4fba34"} Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.687307 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.695149 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" podStartSLOduration=2.403523099 podStartE2EDuration="9.695122723s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:44:59.133254664 +0000 UTC m=+872.441449372" lastFinishedPulling="2026-01-26 18:45:06.424854278 +0000 UTC m=+879.733048996" observedRunningTime="2026-01-26 18:45:07.690782886 +0000 UTC m=+880.998977634" watchObservedRunningTime="2026-01-26 18:45:07.695122723 +0000 UTC m=+881.003317431" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.719612 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" podStartSLOduration=2.417324212 podStartE2EDuration="9.719595622s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:44:59.25738006 +0000 UTC m=+872.565574768" lastFinishedPulling="2026-01-26 18:45:06.55965147 +0000 UTC m=+879.867846178" observedRunningTime="2026-01-26 
18:45:07.714479625 +0000 UTC m=+881.022674343" watchObservedRunningTime="2026-01-26 18:45:07.719595622 +0000 UTC m=+881.027790330" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.743609 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=7.498247799 podStartE2EDuration="9.743589768s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:45:04.316589957 +0000 UTC m=+877.624784665" lastFinishedPulling="2026-01-26 18:45:06.561931886 +0000 UTC m=+879.870126634" observedRunningTime="2026-01-26 18:45:07.732688088 +0000 UTC m=+881.040882806" watchObservedRunningTime="2026-01-26 18:45:07.743589768 +0000 UTC m=+881.051784486" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.753405 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" podStartSLOduration=2.515294267 podStartE2EDuration="9.753386412s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:44:59.320427107 +0000 UTC m=+872.628621815" lastFinishedPulling="2026-01-26 18:45:06.558519222 +0000 UTC m=+879.866713960" observedRunningTime="2026-01-26 18:45:07.750188372 +0000 UTC m=+881.058383090" watchObservedRunningTime="2026-01-26 18:45:07.753386412 +0000 UTC m=+881.061581130" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.780213 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.690918339 podStartE2EDuration="9.780192228s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:45:00.090962057 +0000 UTC m=+873.399156765" lastFinishedPulling="2026-01-26 18:45:06.180235946 +0000 UTC m=+879.488430654" observedRunningTime="2026-01-26 18:45:07.778023925 +0000 UTC m=+881.086218643" watchObservedRunningTime="2026-01-26 18:45:07.780192228 
+0000 UTC m=+881.088386936" Jan 26 18:45:07 crc kubenswrapper[4737]: I0126 18:45:07.800693 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.939363488 podStartE2EDuration="9.800670828s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:45:00.699297293 +0000 UTC m=+874.007491991" lastFinishedPulling="2026-01-26 18:45:06.560604583 +0000 UTC m=+879.868799331" observedRunningTime="2026-01-26 18:45:07.797457408 +0000 UTC m=+881.105652116" watchObservedRunningTime="2026-01-26 18:45:07.800670828 +0000 UTC m=+881.108865546" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.754036 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" event={"ID":"225843b1-6423-4d7f-aa3c-5945a9e4bd8e","Type":"ContainerStarted","Data":"52861b901130acf3c6ba6c6def26095dbbf10481c2d0ab30e52a742d6b76e718"} Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.754866 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.754966 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.758931 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" event={"ID":"e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e","Type":"ContainerStarted","Data":"88e15f0dbe35a83972c940520d2765c4518f4913504d3ae86e886f97f42577aa"} Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.759921 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.760317 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.764946 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.765723 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.769415 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.770733 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.780021 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-kcfsl" podStartSLOduration=2.373977225 podStartE2EDuration="17.780000723s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:44:59.895468146 +0000 UTC m=+873.203662844" lastFinishedPulling="2026-01-26 18:45:15.301491634 +0000 UTC m=+888.609686342" observedRunningTime="2026-01-26 18:45:15.774599178 +0000 UTC m=+889.082793896" watchObservedRunningTime="2026-01-26 18:45:15.780000723 +0000 UTC m=+889.088195451" Jan 26 18:45:15 crc kubenswrapper[4737]: I0126 18:45:15.796617 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5c6b766d5f-c5kng" podStartSLOduration=2.644380719 podStartE2EDuration="17.796600536s" podCreationTimestamp="2026-01-26 18:44:58 +0000 UTC" firstStartedPulling="2026-01-26 18:45:00.15378837 +0000 UTC m=+873.461983078" lastFinishedPulling="2026-01-26 18:45:15.306008187 +0000 UTC 
m=+888.614202895" observedRunningTime="2026-01-26 18:45:15.793143149 +0000 UTC m=+889.101337897" watchObservedRunningTime="2026-01-26 18:45:15.796600536 +0000 UTC m=+889.104795244" Jan 26 18:45:28 crc kubenswrapper[4737]: I0126 18:45:28.583585 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-6wp46" Jan 26 18:45:28 crc kubenswrapper[4737]: I0126 18:45:28.744313 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-rsdfq" Jan 26 18:45:28 crc kubenswrapper[4737]: I0126 18:45:28.863287 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-qqkdc" Jan 26 18:45:29 crc kubenswrapper[4737]: I0126 18:45:29.851885 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 26 18:45:30 crc kubenswrapper[4737]: I0126 18:45:30.233365 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 26 18:45:31 crc kubenswrapper[4737]: I0126 18:45:31.529242 4737 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 26 18:45:31 crc kubenswrapper[4737]: I0126 18:45:31.529562 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a05526c9-7b63-4f57-bdaf-95d8a7912bb8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 18:45:41 crc kubenswrapper[4737]: I0126 18:45:41.527986 4737 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe 
status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 26 18:45:41 crc kubenswrapper[4737]: I0126 18:45:41.528581 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a05526c9-7b63-4f57-bdaf-95d8a7912bb8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.281891 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:45:46 crc kubenswrapper[4737]: E0126 18:45:46.282844 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52054249-74bc-48df-978b-dcf49912e6c7" containerName="collect-profiles" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.282858 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="52054249-74bc-48df-978b-dcf49912e6c7" containerName="collect-profiles" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.282997 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="52054249-74bc-48df-978b-dcf49912e6c7" containerName="collect-profiles" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.284042 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.302281 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.382439 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.382505 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.383016 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvp2\" (UniqueName: \"kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.484555 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvp2\" (UniqueName: \"kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.484653 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.484692 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.485301 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.485345 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.513303 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvp2\" (UniqueName: \"kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2\") pod \"community-operators-kjxq7\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.608143 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.950365 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:45:46 crc kubenswrapper[4737]: W0126 18:45:46.960229 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaddd7a71_19f0_47a5_af5e_9f76509513fe.slice/crio-bfdf7512d4ed9d8d241b925a0c2ef1c24614cf3c8a1d12b66d0c89f002b028cd WatchSource:0}: Error finding container bfdf7512d4ed9d8d241b925a0c2ef1c24614cf3c8a1d12b66d0c89f002b028cd: Status 404 returned error can't find the container with id bfdf7512d4ed9d8d241b925a0c2ef1c24614cf3c8a1d12b66d0c89f002b028cd Jan 26 18:45:46 crc kubenswrapper[4737]: I0126 18:45:46.995575 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerStarted","Data":"bfdf7512d4ed9d8d241b925a0c2ef1c24614cf3c8a1d12b66d0c89f002b028cd"} Jan 26 18:45:47 crc kubenswrapper[4737]: I0126 18:45:47.997207 4737 generic.go:334] "Generic (PLEG): container finished" podID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerID="c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288" exitCode=0 Jan 26 18:45:47 crc kubenswrapper[4737]: I0126 18:45:47.997286 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerDied","Data":"c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288"} Jan 26 18:45:49 crc kubenswrapper[4737]: I0126 18:45:49.009552 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" 
event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerStarted","Data":"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0"} Jan 26 18:45:50 crc kubenswrapper[4737]: I0126 18:45:50.021208 4737 generic.go:334] "Generic (PLEG): container finished" podID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerID="230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0" exitCode=0 Jan 26 18:45:50 crc kubenswrapper[4737]: I0126 18:45:50.021279 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerDied","Data":"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0"} Jan 26 18:45:50 crc kubenswrapper[4737]: I0126 18:45:50.021627 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerStarted","Data":"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6"} Jan 26 18:45:50 crc kubenswrapper[4737]: I0126 18:45:50.043757 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kjxq7" podStartSLOduration=2.627138103 podStartE2EDuration="4.043720967s" podCreationTimestamp="2026-01-26 18:45:46 +0000 UTC" firstStartedPulling="2026-01-26 18:45:47.999064026 +0000 UTC m=+921.307258734" lastFinishedPulling="2026-01-26 18:45:49.41564689 +0000 UTC m=+922.723841598" observedRunningTime="2026-01-26 18:45:50.037853471 +0000 UTC m=+923.346048179" watchObservedRunningTime="2026-01-26 18:45:50.043720967 +0000 UTC m=+923.351915685" Jan 26 18:45:51 crc kubenswrapper[4737]: I0126 18:45:51.531787 4737 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after 
being ready Jan 26 18:45:51 crc kubenswrapper[4737]: I0126 18:45:51.531852 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="a05526c9-7b63-4f57-bdaf-95d8a7912bb8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 18:45:56 crc kubenswrapper[4737]: I0126 18:45:56.609023 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:56 crc kubenswrapper[4737]: I0126 18:45:56.612884 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:56 crc kubenswrapper[4737]: I0126 18:45:56.746125 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:57 crc kubenswrapper[4737]: I0126 18:45:57.117620 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:57 crc kubenswrapper[4737]: I0126 18:45:57.169031 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.087433 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kjxq7" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="registry-server" containerID="cri-o://0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6" gracePeriod=2 Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.561172 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.727422 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content\") pod \"addd7a71-19f0-47a5-af5e-9f76509513fe\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.727596 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities\") pod \"addd7a71-19f0-47a5-af5e-9f76509513fe\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.728256 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cvp2\" (UniqueName: \"kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2\") pod \"addd7a71-19f0-47a5-af5e-9f76509513fe\" (UID: \"addd7a71-19f0-47a5-af5e-9f76509513fe\") " Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.728771 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities" (OuterVolumeSpecName: "utilities") pod "addd7a71-19f0-47a5-af5e-9f76509513fe" (UID: "addd7a71-19f0-47a5-af5e-9f76509513fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.735095 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2" (OuterVolumeSpecName: "kube-api-access-4cvp2") pod "addd7a71-19f0-47a5-af5e-9f76509513fe" (UID: "addd7a71-19f0-47a5-af5e-9f76509513fe"). InnerVolumeSpecName "kube-api-access-4cvp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.781544 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "addd7a71-19f0-47a5-af5e-9f76509513fe" (UID: "addd7a71-19f0-47a5-af5e-9f76509513fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.830295 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cvp2\" (UniqueName: \"kubernetes.io/projected/addd7a71-19f0-47a5-af5e-9f76509513fe-kube-api-access-4cvp2\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.830340 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4737]: I0126 18:45:59.830352 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/addd7a71-19f0-47a5-af5e-9f76509513fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.114941 4737 generic.go:334] "Generic (PLEG): container finished" podID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerID="0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6" exitCode=0 Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.115155 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerDied","Data":"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6"} Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.115511 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-kjxq7" event={"ID":"addd7a71-19f0-47a5-af5e-9f76509513fe","Type":"ContainerDied","Data":"bfdf7512d4ed9d8d241b925a0c2ef1c24614cf3c8a1d12b66d0c89f002b028cd"} Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.115179 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kjxq7" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.115581 4737 scope.go:117] "RemoveContainer" containerID="0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.151339 4737 scope.go:117] "RemoveContainer" containerID="230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.155889 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.162413 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kjxq7"] Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.179595 4737 scope.go:117] "RemoveContainer" containerID="c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.200042 4737 scope.go:117] "RemoveContainer" containerID="0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6" Jan 26 18:46:00 crc kubenswrapper[4737]: E0126 18:46:00.200676 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6\": container with ID starting with 0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6 not found: ID does not exist" containerID="0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 
18:46:00.200780 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6"} err="failed to get container status \"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6\": rpc error: code = NotFound desc = could not find container \"0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6\": container with ID starting with 0d9b6cd6f0436ece05778b9446753cb3014a722914a3078d3348952ba7c6efe6 not found: ID does not exist" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.200858 4737 scope.go:117] "RemoveContainer" containerID="230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0" Jan 26 18:46:00 crc kubenswrapper[4737]: E0126 18:46:00.201317 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0\": container with ID starting with 230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0 not found: ID does not exist" containerID="230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.201402 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0"} err="failed to get container status \"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0\": rpc error: code = NotFound desc = could not find container \"230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0\": container with ID starting with 230beaa683d6894497ef6ea2eaba9f969d229d94af67e72b45bf71e60f2aa9a0 not found: ID does not exist" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.201457 4737 scope.go:117] "RemoveContainer" containerID="c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288" Jan 26 18:46:00 crc 
kubenswrapper[4737]: E0126 18:46:00.201879 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288\": container with ID starting with c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288 not found: ID does not exist" containerID="c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.201993 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288"} err="failed to get container status \"c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288\": rpc error: code = NotFound desc = could not find container \"c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288\": container with ID starting with c23e87edcb03527799ac450839ed692e89879fed782bc969cd92d4ac2936d288 not found: ID does not exist" Jan 26 18:46:00 crc kubenswrapper[4737]: I0126 18:46:00.993235 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" path="/var/lib/kubelet/pods/addd7a71-19f0-47a5-af5e-9f76509513fe/volumes" Jan 26 18:46:01 crc kubenswrapper[4737]: I0126 18:46:01.528113 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.215771 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:09 crc kubenswrapper[4737]: E0126 18:46:09.216564 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="extract-utilities" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.216577 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="extract-utilities" Jan 26 18:46:09 crc kubenswrapper[4737]: E0126 18:46:09.216585 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="extract-content" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.216591 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="extract-content" Jan 26 18:46:09 crc kubenswrapper[4737]: E0126 18:46:09.216606 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="registry-server" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.216612 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="registry-server" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.216749 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="addd7a71-19f0-47a5-af5e-9f76509513fe" containerName="registry-server" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.217906 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.233493 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.374534 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.374596 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvjf\" (UniqueName: \"kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.374850 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.476396 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.476452 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cdvjf\" (UniqueName: \"kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.476532 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.477052 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.477153 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.502443 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdvjf\" (UniqueName: \"kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf\") pod \"redhat-marketplace-jcw6k\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.538196 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:09 crc kubenswrapper[4737]: I0126 18:46:09.969180 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:10 crc kubenswrapper[4737]: I0126 18:46:10.187934 4737 generic.go:334] "Generic (PLEG): container finished" podID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerID="15e7602bfba80c593572685b4e5e19008dfb6b1bd0200a3cb2ab26d5e727cbd9" exitCode=0 Jan 26 18:46:10 crc kubenswrapper[4737]: I0126 18:46:10.188029 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerDied","Data":"15e7602bfba80c593572685b4e5e19008dfb6b1bd0200a3cb2ab26d5e727cbd9"} Jan 26 18:46:10 crc kubenswrapper[4737]: I0126 18:46:10.188358 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerStarted","Data":"8e7202f2fda65e2d53dce65b5141097e167215b0b543845f86819dc6e05ee13f"} Jan 26 18:46:11 crc kubenswrapper[4737]: I0126 18:46:11.201587 4737 generic.go:334] "Generic (PLEG): container finished" podID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerID="7674a9988f366b40dedb1b1cbe003f2bc1f11c20126d53ac6aef3e561f551ae2" exitCode=0 Jan 26 18:46:11 crc kubenswrapper[4737]: I0126 18:46:11.201635 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerDied","Data":"7674a9988f366b40dedb1b1cbe003f2bc1f11c20126d53ac6aef3e561f551ae2"} Jan 26 18:46:12 crc kubenswrapper[4737]: I0126 18:46:12.219529 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" 
event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerStarted","Data":"dc1cddaf721b4f826010e6c7329ebb19a6502e1ec6d4d027729e6d49efb35606"} Jan 26 18:46:12 crc kubenswrapper[4737]: I0126 18:46:12.236632 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jcw6k" podStartSLOduration=1.777401515 podStartE2EDuration="3.236612948s" podCreationTimestamp="2026-01-26 18:46:09 +0000 UTC" firstStartedPulling="2026-01-26 18:46:10.18919019 +0000 UTC m=+943.497384898" lastFinishedPulling="2026-01-26 18:46:11.648401633 +0000 UTC m=+944.956596331" observedRunningTime="2026-01-26 18:46:12.235773517 +0000 UTC m=+945.543968225" watchObservedRunningTime="2026-01-26 18:46:12.236612948 +0000 UTC m=+945.544807656" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.557384 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-874mq"] Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.558896 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.564133 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.564938 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-snp2x" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.566026 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.566824 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.567195 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.574590 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.582393 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-874mq"] Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.637624 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-874mq"] Jan 26 18:46:16 crc kubenswrapper[4737]: E0126 18:46:16.638189 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-k2zb8 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-874mq" podUID="29b6c57e-4752-472d-97b6-b49dce1043aa" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691220 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691312 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691345 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691490 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691588 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.691842 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.692021 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zb8\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.692085 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.692146 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.692198 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.692318 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794047 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794110 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794153 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794175 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794217 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 
18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794247 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794269 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794293 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794320 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794352 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794506 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k2zb8\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.794537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: E0126 18:46:16.794752 4737 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Jan 26 18:46:16 crc kubenswrapper[4737]: E0126 18:46:16.794807 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics podName:29b6c57e-4752-472d-97b6-b49dce1043aa nodeName:}" failed. No retries permitted until 2026-01-26 18:46:17.294787567 +0000 UTC m=+950.602982275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics") pod "collector-874mq" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa") : secret "collector-metrics" not found Jan 26 18:46:16 crc kubenswrapper[4737]: E0126 18:46:16.795150 4737 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Jan 26 18:46:16 crc kubenswrapper[4737]: E0126 18:46:16.795252 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver podName:29b6c57e-4752-472d-97b6-b49dce1043aa nodeName:}" failed. No retries permitted until 2026-01-26 18:46:17.295229238 +0000 UTC m=+950.603423996 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver") pod "collector-874mq" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa") : secret "collector-syslog-receiver" not found Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.795353 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.795459 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.795642 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.795729 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.800511 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token\") pod 
\"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.803465 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.820060 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zb8\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:16 crc kubenswrapper[4737]: I0126 18:46:16.824443 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.254312 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.263876 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.302309 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.302394 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.305488 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.305633 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") pod \"collector-874mq\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " pod="openshift-logging/collector-874mq" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.403907 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.403964 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404013 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404114 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404168 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404226 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2zb8\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404252 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc 
kubenswrapper[4737]: I0126 18:46:17.404295 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404241 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir" (OuterVolumeSpecName: "datadir") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404319 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404433 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404471 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt\") pod \"29b6c57e-4752-472d-97b6-b49dce1043aa\" (UID: \"29b6c57e-4752-472d-97b6-b49dce1043aa\") " Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.404816 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.405017 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config" (OuterVolumeSpecName: "config") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.405057 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.405051 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.405186 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.405204 4737 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/29b6c57e-4752-472d-97b6-b49dce1043aa-datadir\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.407466 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics" (OuterVolumeSpecName: "metrics") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.408062 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token" (OuterVolumeSpecName: "sa-token") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.408732 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.408818 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8" (OuterVolumeSpecName: "kube-api-access-k2zb8") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "kube-api-access-k2zb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.409217 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp" (OuterVolumeSpecName: "tmp") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.416707 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token" (OuterVolumeSpecName: "collector-token") pod "29b6c57e-4752-472d-97b6-b49dce1043aa" (UID: "29b6c57e-4752-472d-97b6-b49dce1043aa"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.507589 4737 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508113 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508245 4737 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/29b6c57e-4752-472d-97b6-b49dce1043aa-tmp\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508345 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2zb8\" (UniqueName: \"kubernetes.io/projected/29b6c57e-4752-472d-97b6-b49dce1043aa-kube-api-access-k2zb8\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508453 4737 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508545 4737 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508632 4737 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/29b6c57e-4752-472d-97b6-b49dce1043aa-collector-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508715 4737 reconciler_common.go:293] "Volume 
detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:17 crc kubenswrapper[4737]: I0126 18:46:17.508801 4737 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/29b6c57e-4752-472d-97b6-b49dce1043aa-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.260095 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-874mq" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.317923 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-874mq"] Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.325901 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-874mq"] Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.340308 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-vbgpv"] Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.341400 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.348946 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.349197 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.349390 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-snp2x" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.349562 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.350139 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-vbgpv"] Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.350223 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.356823 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422564 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-metrics\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422630 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " 
pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422716 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e3d8492-59e3-4dc0-b14a-261053397eb7-tmp\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422761 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h48d\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-kube-api-access-8h48d\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422839 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-syslog-receiver\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422873 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-trusted-ca\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.422928 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-entrypoint\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 
18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.423018 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config-openshift-service-cacrt\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.423098 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e3d8492-59e3-4dc0-b14a-261053397eb7-datadir\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.423140 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.423193 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-sa-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525579 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-metrics\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525651 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525707 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e3d8492-59e3-4dc0-b14a-261053397eb7-tmp\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525733 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h48d\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-kube-api-access-8h48d\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525785 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-syslog-receiver\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525812 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-trusted-ca\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525849 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-entrypoint\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525873 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config-openshift-service-cacrt\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525898 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e3d8492-59e3-4dc0-b14a-261053397eb7-datadir\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525919 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.525946 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-sa-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.526446 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6e3d8492-59e3-4dc0-b14a-261053397eb7-datadir\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " 
pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.526721 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config-openshift-service-cacrt\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.526782 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-config\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.527456 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-trusted-ca\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.544681 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6e3d8492-59e3-4dc0-b14a-261053397eb7-entrypoint\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.549381 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e3d8492-59e3-4dc0-b14a-261053397eb7-tmp\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.549931 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-metrics\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.550625 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-syslog-receiver\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.550968 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6e3d8492-59e3-4dc0-b14a-261053397eb7-collector-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.582809 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-sa-token\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.595813 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h48d\" (UniqueName: \"kubernetes.io/projected/6e3d8492-59e3-4dc0-b14a-261053397eb7-kube-api-access-8h48d\") pod \"collector-vbgpv\" (UID: \"6e3d8492-59e3-4dc0-b14a-261053397eb7\") " pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.666764 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vbgpv" Jan 26 18:46:18 crc kubenswrapper[4737]: I0126 18:46:18.992762 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b6c57e-4752-472d-97b6-b49dce1043aa" path="/var/lib/kubelet/pods/29b6c57e-4752-472d-97b6-b49dce1043aa/volumes" Jan 26 18:46:19 crc kubenswrapper[4737]: I0126 18:46:19.077875 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-vbgpv"] Jan 26 18:46:19 crc kubenswrapper[4737]: I0126 18:46:19.269089 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-vbgpv" event={"ID":"6e3d8492-59e3-4dc0-b14a-261053397eb7","Type":"ContainerStarted","Data":"4d278fd22732fcb07cf54a72721031f17c37159614ad048ba000b62ef701fd4c"} Jan 26 18:46:19 crc kubenswrapper[4737]: I0126 18:46:19.538723 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:19 crc kubenswrapper[4737]: I0126 18:46:19.538818 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:19 crc kubenswrapper[4737]: I0126 18:46:19.584587 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:20 crc kubenswrapper[4737]: I0126 18:46:20.323897 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:20 crc kubenswrapper[4737]: I0126 18:46:20.372228 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:22 crc kubenswrapper[4737]: I0126 18:46:22.302022 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jcw6k" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="registry-server" 
containerID="cri-o://dc1cddaf721b4f826010e6c7329ebb19a6502e1ec6d4d027729e6d49efb35606" gracePeriod=2 Jan 26 18:46:23 crc kubenswrapper[4737]: I0126 18:46:23.313288 4737 generic.go:334] "Generic (PLEG): container finished" podID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerID="dc1cddaf721b4f826010e6c7329ebb19a6502e1ec6d4d027729e6d49efb35606" exitCode=0 Jan 26 18:46:23 crc kubenswrapper[4737]: I0126 18:46:23.313499 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerDied","Data":"dc1cddaf721b4f826010e6c7329ebb19a6502e1ec6d4d027729e6d49efb35606"} Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.617646 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.775678 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdvjf\" (UniqueName: \"kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf\") pod \"317b2aaf-bd65-4d5b-ad28-6826be98f201\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.775818 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities\") pod \"317b2aaf-bd65-4d5b-ad28-6826be98f201\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.775879 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content\") pod \"317b2aaf-bd65-4d5b-ad28-6826be98f201\" (UID: \"317b2aaf-bd65-4d5b-ad28-6826be98f201\") " Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 
18:46:27.777828 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities" (OuterVolumeSpecName: "utilities") pod "317b2aaf-bd65-4d5b-ad28-6826be98f201" (UID: "317b2aaf-bd65-4d5b-ad28-6826be98f201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.779649 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf" (OuterVolumeSpecName: "kube-api-access-cdvjf") pod "317b2aaf-bd65-4d5b-ad28-6826be98f201" (UID: "317b2aaf-bd65-4d5b-ad28-6826be98f201"). InnerVolumeSpecName "kube-api-access-cdvjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.798999 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "317b2aaf-bd65-4d5b-ad28-6826be98f201" (UID: "317b2aaf-bd65-4d5b-ad28-6826be98f201"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.877391 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdvjf\" (UniqueName: \"kubernetes.io/projected/317b2aaf-bd65-4d5b-ad28-6826be98f201-kube-api-access-cdvjf\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.877421 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:27 crc kubenswrapper[4737]: I0126 18:46:27.877430 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317b2aaf-bd65-4d5b-ad28-6826be98f201-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.351675 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-vbgpv" event={"ID":"6e3d8492-59e3-4dc0-b14a-261053397eb7","Type":"ContainerStarted","Data":"97fbab8f0f95087b14ce74937081d8b481173c9d73cb235a67432b854c72a6da"} Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.353530 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcw6k" event={"ID":"317b2aaf-bd65-4d5b-ad28-6826be98f201","Type":"ContainerDied","Data":"8e7202f2fda65e2d53dce65b5141097e167215b0b543845f86819dc6e05ee13f"} Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.353566 4737 scope.go:117] "RemoveContainer" containerID="dc1cddaf721b4f826010e6c7329ebb19a6502e1ec6d4d027729e6d49efb35606" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.353660 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcw6k" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.373148 4737 scope.go:117] "RemoveContainer" containerID="7674a9988f366b40dedb1b1cbe003f2bc1f11c20126d53ac6aef3e561f551ae2" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.377828 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-vbgpv" podStartSLOduration=1.801541359 podStartE2EDuration="10.377809767s" podCreationTimestamp="2026-01-26 18:46:18 +0000 UTC" firstStartedPulling="2026-01-26 18:46:19.096715134 +0000 UTC m=+952.404909842" lastFinishedPulling="2026-01-26 18:46:27.672983542 +0000 UTC m=+960.981178250" observedRunningTime="2026-01-26 18:46:28.372096185 +0000 UTC m=+961.680290913" watchObservedRunningTime="2026-01-26 18:46:28.377809767 +0000 UTC m=+961.686004475" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.412215 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.423544 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcw6k"] Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.426959 4737 scope.go:117] "RemoveContainer" containerID="15e7602bfba80c593572685b4e5e19008dfb6b1bd0200a3cb2ab26d5e727cbd9" Jan 26 18:46:28 crc kubenswrapper[4737]: I0126 18:46:28.990832 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" path="/var/lib/kubelet/pods/317b2aaf-bd65-4d5b-ad28-6826be98f201/volumes" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.513527 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:46:47 crc kubenswrapper[4737]: E0126 18:46:47.514657 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" 
containerName="extract-content" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.514691 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="extract-content" Jan 26 18:46:47 crc kubenswrapper[4737]: E0126 18:46:47.514717 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="registry-server" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.514725 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="registry-server" Jan 26 18:46:47 crc kubenswrapper[4737]: E0126 18:46:47.514740 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="extract-utilities" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.514767 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="extract-utilities" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.514947 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="317b2aaf-bd65-4d5b-ad28-6826be98f201" containerName="registry-server" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.516712 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.528305 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.604520 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7nn5\" (UniqueName: \"kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.605198 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.605457 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.707238 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7nn5\" (UniqueName: \"kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.707324 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.707393 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.707833 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.707890 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.737807 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7nn5\" (UniqueName: \"kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5\") pod \"certified-operators-cxqcm\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:47 crc kubenswrapper[4737]: I0126 18:46:47.836231 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:48 crc kubenswrapper[4737]: I0126 18:46:48.320154 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:46:48 crc kubenswrapper[4737]: I0126 18:46:48.516882 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerStarted","Data":"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17"} Jan 26 18:46:48 crc kubenswrapper[4737]: I0126 18:46:48.518334 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerStarted","Data":"1e645f1180140ef6f0272687e066816e5b932a53a16df925b70d64f9d7d724d1"} Jan 26 18:46:49 crc kubenswrapper[4737]: I0126 18:46:49.525227 4737 generic.go:334] "Generic (PLEG): container finished" podID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerID="6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17" exitCode=0 Jan 26 18:46:49 crc kubenswrapper[4737]: I0126 18:46:49.525282 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerDied","Data":"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17"} Jan 26 18:46:49 crc kubenswrapper[4737]: I0126 18:46:49.527136 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerStarted","Data":"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94"} Jan 26 18:46:50 crc kubenswrapper[4737]: I0126 18:46:50.538016 4737 generic.go:334] "Generic (PLEG): container finished" podID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" 
containerID="112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94" exitCode=0 Jan 26 18:46:50 crc kubenswrapper[4737]: I0126 18:46:50.538105 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerDied","Data":"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94"} Jan 26 18:46:51 crc kubenswrapper[4737]: I0126 18:46:51.551183 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerStarted","Data":"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44"} Jan 26 18:46:51 crc kubenswrapper[4737]: I0126 18:46:51.575597 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxqcm" podStartSLOduration=2.067307679 podStartE2EDuration="4.575580303s" podCreationTimestamp="2026-01-26 18:46:47 +0000 UTC" firstStartedPulling="2026-01-26 18:46:48.518537661 +0000 UTC m=+981.826732369" lastFinishedPulling="2026-01-26 18:46:51.026810285 +0000 UTC m=+984.335004993" observedRunningTime="2026-01-26 18:46:51.573995133 +0000 UTC m=+984.882189841" watchObservedRunningTime="2026-01-26 18:46:51.575580303 +0000 UTC m=+984.883775011" Jan 26 18:46:57 crc kubenswrapper[4737]: I0126 18:46:57.836674 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:57 crc kubenswrapper[4737]: I0126 18:46:57.837607 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:57 crc kubenswrapper[4737]: I0126 18:46:57.896689 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.658243 
4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.786441 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98"] Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.787777 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.790264 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.797106 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98"] Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.897890 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.897976 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjjx\" (UniqueName: \"kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.898088 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.999207 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.999318 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.999359 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjjx\" (UniqueName: \"kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.999685 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:58 crc kubenswrapper[4737]: I0126 18:46:58.999716 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:59 crc kubenswrapper[4737]: I0126 18:46:59.018017 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjjx\" (UniqueName: \"kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:59 crc kubenswrapper[4737]: I0126 18:46:59.123556 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:46:59 crc kubenswrapper[4737]: I0126 18:46:59.576916 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98"] Jan 26 18:46:59 crc kubenswrapper[4737]: I0126 18:46:59.616096 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" event={"ID":"7f36ed9b-a077-4329-803a-d5738c97e844","Type":"ContainerStarted","Data":"00135f57fdb2f465f566b425aeb29313438312b542053596216be874e5f8cb31"} Jan 26 18:47:00 crc kubenswrapper[4737]: I0126 18:47:00.622908 4737 generic.go:334] "Generic (PLEG): container finished" podID="7f36ed9b-a077-4329-803a-d5738c97e844" containerID="d97373e162cb695eea35b47a529b47f8ebcfb3db0478b122ed1106b726cc970e" exitCode=0 Jan 26 18:47:00 crc kubenswrapper[4737]: I0126 18:47:00.622950 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" event={"ID":"7f36ed9b-a077-4329-803a-d5738c97e844","Type":"ContainerDied","Data":"d97373e162cb695eea35b47a529b47f8ebcfb3db0478b122ed1106b726cc970e"} Jan 26 18:47:00 crc kubenswrapper[4737]: I0126 18:47:00.948767 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:47:00 crc kubenswrapper[4737]: I0126 18:47:00.949152 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.135640 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.137033 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxqcm" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="registry-server" containerID="cri-o://703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44" gracePeriod=2 Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.556308 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.637595 4737 generic.go:334] "Generic (PLEG): container finished" podID="7f36ed9b-a077-4329-803a-d5738c97e844" containerID="8ce3f19ea439d161ccb383b7d7835c7280b74f127d083d9a349e38caacd2d7b3" exitCode=0 Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.637672 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" event={"ID":"7f36ed9b-a077-4329-803a-d5738c97e844","Type":"ContainerDied","Data":"8ce3f19ea439d161ccb383b7d7835c7280b74f127d083d9a349e38caacd2d7b3"} Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.640159 4737 generic.go:334] "Generic (PLEG): container finished" podID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerID="703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44" exitCode=0 Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.640197 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerDied","Data":"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44"} Jan 26 
18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.640203 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxqcm" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.640227 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxqcm" event={"ID":"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3","Type":"ContainerDied","Data":"1e645f1180140ef6f0272687e066816e5b932a53a16df925b70d64f9d7d724d1"} Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.640251 4737 scope.go:117] "RemoveContainer" containerID="703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.661940 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities\") pod \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.662193 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content\") pod \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.662270 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7nn5\" (UniqueName: \"kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5\") pod \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\" (UID: \"4fe6529b-b3fc-406d-8e2b-57cadcf1edb3\") " Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.665483 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities" (OuterVolumeSpecName: 
"utilities") pod "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" (UID: "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.701886 4737 scope.go:117] "RemoveContainer" containerID="112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.704732 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5" (OuterVolumeSpecName: "kube-api-access-j7nn5") pod "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" (UID: "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3"). InnerVolumeSpecName "kube-api-access-j7nn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.734362 4737 scope.go:117] "RemoveContainer" containerID="6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.735504 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" (UID: "4fe6529b-b3fc-406d-8e2b-57cadcf1edb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.756670 4737 scope.go:117] "RemoveContainer" containerID="703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44" Jan 26 18:47:02 crc kubenswrapper[4737]: E0126 18:47:02.757120 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44\": container with ID starting with 703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44 not found: ID does not exist" containerID="703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.757151 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44"} err="failed to get container status \"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44\": rpc error: code = NotFound desc = could not find container \"703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44\": container with ID starting with 703847f6d78aee1ceb850b0ea8b0711b2d224e5439cdca35df9e139570686b44 not found: ID does not exist" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.757171 4737 scope.go:117] "RemoveContainer" containerID="112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94" Jan 26 18:47:02 crc kubenswrapper[4737]: E0126 18:47:02.758340 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94\": container with ID starting with 112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94 not found: ID does not exist" containerID="112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.758362 
4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94"} err="failed to get container status \"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94\": rpc error: code = NotFound desc = could not find container \"112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94\": container with ID starting with 112bc3bfdad6b2f92ce260c327f8ae396938fe32a0414947089f43116890fc94 not found: ID does not exist" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.758375 4737 scope.go:117] "RemoveContainer" containerID="6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17" Jan 26 18:47:02 crc kubenswrapper[4737]: E0126 18:47:02.759718 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17\": container with ID starting with 6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17 not found: ID does not exist" containerID="6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.759768 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17"} err="failed to get container status \"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17\": rpc error: code = NotFound desc = could not find container \"6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17\": container with ID starting with 6ddd9ade337154fb6671073dce25d00fd10978b12a4b96d98cc9ee27d9ca7c17 not found: ID does not exist" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.767239 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.767356 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.767418 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7nn5\" (UniqueName: \"kubernetes.io/projected/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3-kube-api-access-j7nn5\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.990509 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:47:02 crc kubenswrapper[4737]: I0126 18:47:02.990543 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxqcm"] Jan 26 18:47:03 crc kubenswrapper[4737]: I0126 18:47:03.652135 4737 generic.go:334] "Generic (PLEG): container finished" podID="7f36ed9b-a077-4329-803a-d5738c97e844" containerID="b6e3a9530d38d405f7ab8bc342479a78046b620c8defdd0ba868eaeecdda71bf" exitCode=0 Jan 26 18:47:03 crc kubenswrapper[4737]: I0126 18:47:03.652169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" event={"ID":"7f36ed9b-a077-4329-803a-d5738c97e844","Type":"ContainerDied","Data":"b6e3a9530d38d405f7ab8bc342479a78046b620c8defdd0ba868eaeecdda71bf"} Jan 26 18:47:04 crc kubenswrapper[4737]: I0126 18:47:04.912339 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:47:04 crc kubenswrapper[4737]: I0126 18:47:04.990365 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" path="/var/lib/kubelet/pods/4fe6529b-b3fc-406d-8e2b-57cadcf1edb3/volumes" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.001140 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxjjx\" (UniqueName: \"kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx\") pod \"7f36ed9b-a077-4329-803a-d5738c97e844\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.001492 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle\") pod \"7f36ed9b-a077-4329-803a-d5738c97e844\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.001640 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util\") pod \"7f36ed9b-a077-4329-803a-d5738c97e844\" (UID: \"7f36ed9b-a077-4329-803a-d5738c97e844\") " Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.001834 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle" (OuterVolumeSpecName: "bundle") pod "7f36ed9b-a077-4329-803a-d5738c97e844" (UID: "7f36ed9b-a077-4329-803a-d5738c97e844"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.002641 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.006876 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx" (OuterVolumeSpecName: "kube-api-access-gxjjx") pod "7f36ed9b-a077-4329-803a-d5738c97e844" (UID: "7f36ed9b-a077-4329-803a-d5738c97e844"). InnerVolumeSpecName "kube-api-access-gxjjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.015991 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util" (OuterVolumeSpecName: "util") pod "7f36ed9b-a077-4329-803a-d5738c97e844" (UID: "7f36ed9b-a077-4329-803a-d5738c97e844"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.104985 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxjjx\" (UniqueName: \"kubernetes.io/projected/7f36ed9b-a077-4329-803a-d5738c97e844-kube-api-access-gxjjx\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.105032 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7f36ed9b-a077-4329-803a-d5738c97e844-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.666557 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" event={"ID":"7f36ed9b-a077-4329-803a-d5738c97e844","Type":"ContainerDied","Data":"00135f57fdb2f465f566b425aeb29313438312b542053596216be874e5f8cb31"} Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.666829 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00135f57fdb2f465f566b425aeb29313438312b542053596216be874e5f8cb31" Jan 26 18:47:05 crc kubenswrapper[4737]: I0126 18:47:05.666601 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396411 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dg9v7"] Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396848 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="pull" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396863 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="pull" Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396883 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="extract-utilities" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396891 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="extract-utilities" Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396903 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="util" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396911 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="util" Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396924 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="registry-server" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396932 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="registry-server" Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396951 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="extract-content" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396958 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="extract-content" Jan 26 18:47:08 crc kubenswrapper[4737]: E0126 18:47:08.396979 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="extract" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.396986 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="extract" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.397144 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe6529b-b3fc-406d-8e2b-57cadcf1edb3" containerName="registry-server" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.397166 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f36ed9b-a077-4329-803a-d5738c97e844" containerName="extract" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.397835 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.402303 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lrn9g" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.402477 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.402652 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.458923 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dg9v7"] Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.561103 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vtkm\" (UniqueName: \"kubernetes.io/projected/35a928d3-7171-42be-8005-cbdfec1891c3-kube-api-access-2vtkm\") pod \"nmstate-operator-646758c888-dg9v7\" (UID: \"35a928d3-7171-42be-8005-cbdfec1891c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.663398 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vtkm\" (UniqueName: \"kubernetes.io/projected/35a928d3-7171-42be-8005-cbdfec1891c3-kube-api-access-2vtkm\") pod \"nmstate-operator-646758c888-dg9v7\" (UID: \"35a928d3-7171-42be-8005-cbdfec1891c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.689926 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vtkm\" (UniqueName: \"kubernetes.io/projected/35a928d3-7171-42be-8005-cbdfec1891c3-kube-api-access-2vtkm\") pod \"nmstate-operator-646758c888-dg9v7\" (UID: 
\"35a928d3-7171-42be-8005-cbdfec1891c3\") " pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" Jan 26 18:47:08 crc kubenswrapper[4737]: I0126 18:47:08.761752 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" Jan 26 18:47:09 crc kubenswrapper[4737]: I0126 18:47:09.271265 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dg9v7"] Jan 26 18:47:09 crc kubenswrapper[4737]: I0126 18:47:09.698342 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" event={"ID":"35a928d3-7171-42be-8005-cbdfec1891c3","Type":"ContainerStarted","Data":"704ad8d193c6d5fec00c9e8275b846d265e43c54d5601c9300a67d6ee68571ec"} Jan 26 18:47:12 crc kubenswrapper[4737]: I0126 18:47:12.725512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" event={"ID":"35a928d3-7171-42be-8005-cbdfec1891c3","Type":"ContainerStarted","Data":"f178f64cc20c0459ad5298a295cd73beaa68ac51d9c1a08fdf927a2bcb8d8022"} Jan 26 18:47:12 crc kubenswrapper[4737]: I0126 18:47:12.744433 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dg9v7" podStartSLOduration=2.442811906 podStartE2EDuration="4.744409048s" podCreationTimestamp="2026-01-26 18:47:08 +0000 UTC" firstStartedPulling="2026-01-26 18:47:09.278111653 +0000 UTC m=+1002.586306351" lastFinishedPulling="2026-01-26 18:47:11.579708785 +0000 UTC m=+1004.887903493" observedRunningTime="2026-01-26 18:47:12.743858984 +0000 UTC m=+1006.052053712" watchObservedRunningTime="2026-01-26 18:47:12.744409048 +0000 UTC m=+1006.052603746" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.734729 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qh796"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 
18:47:13.736667 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.738944 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hdpdn" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.742128 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.743431 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.748132 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qh796"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.749688 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.785292 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.813716 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-99d4z"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.814727 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.854965 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkb7\" (UniqueName: \"kubernetes.io/projected/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-kube-api-access-qtkb7\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.855349 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.855384 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/33e00306-edd4-487d-9bc6-e49fa9692a29-kube-api-access-trfww\") pod \"nmstate-metrics-54757c584b-qh796\" (UID: \"33e00306-edd4-487d-9bc6-e49fa9692a29\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.925723 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.926811 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.928971 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rcjgk" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.929261 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.932221 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.942681 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz"] Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.956684 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-dbus-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.956747 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-ovs-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.956774 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvk22\" (UniqueName: \"kubernetes.io/projected/1a140881-5ef3-4582-9694-e24fc14a6fb4-kube-api-access-gvk22\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 
18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.956878 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-nmstate-lock\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.957020 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkb7\" (UniqueName: \"kubernetes.io/projected/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-kube-api-access-qtkb7\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.957078 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.957668 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/33e00306-edd4-487d-9bc6-e49fa9692a29-kube-api-access-trfww\") pod \"nmstate-metrics-54757c584b-qh796\" (UID: \"33e00306-edd4-487d-9bc6-e49fa9692a29\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" Jan 26 18:47:13 crc kubenswrapper[4737]: E0126 18:47:13.958275 4737 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 18:47:13 crc kubenswrapper[4737]: E0126 18:47:13.958336 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair podName:30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5 nodeName:}" failed. No retries permitted until 2026-01-26 18:47:14.458319014 +0000 UTC m=+1007.766513732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-f425m" (UID: "30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5") : secret "openshift-nmstate-webhook" not found Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.989816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfww\" (UniqueName: \"kubernetes.io/projected/33e00306-edd4-487d-9bc6-e49fa9692a29-kube-api-access-trfww\") pod \"nmstate-metrics-54757c584b-qh796\" (UID: \"33e00306-edd4-487d-9bc6-e49fa9692a29\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" Jan 26 18:47:13 crc kubenswrapper[4737]: I0126 18:47:13.990417 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkb7\" (UniqueName: \"kubernetes.io/projected/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-kube-api-access-qtkb7\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059112 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4c4a0a5e-ab9e-478c-8f90-741563313097-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059184 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-dbus-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059239 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-ovs-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059264 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvk22\" (UniqueName: \"kubernetes.io/projected/1a140881-5ef3-4582-9694-e24fc14a6fb4-kube-api-access-gvk22\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059285 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059306 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-nmstate-lock\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059329 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfcm\" 
(UniqueName: \"kubernetes.io/projected/4c4a0a5e-ab9e-478c-8f90-741563313097-kube-api-access-svfcm\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059693 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-ovs-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059718 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-nmstate-lock\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.059880 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1a140881-5ef3-4582-9694-e24fc14a6fb4-dbus-socket\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.071620 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.101801 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvk22\" (UniqueName: \"kubernetes.io/projected/1a140881-5ef3-4582-9694-e24fc14a6fb4-kube-api-access-gvk22\") pod \"nmstate-handler-99d4z\" (UID: \"1a140881-5ef3-4582-9694-e24fc14a6fb4\") " pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.134961 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.136941 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.145365 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.160594 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.160639 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svfcm\" (UniqueName: \"kubernetes.io/projected/4c4a0a5e-ab9e-478c-8f90-741563313097-kube-api-access-svfcm\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.160722 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4c4a0a5e-ab9e-478c-8f90-741563313097-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: E0126 18:47:14.160949 4737 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 18:47:14 crc kubenswrapper[4737]: E0126 18:47:14.161095 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert podName:4c4a0a5e-ab9e-478c-8f90-741563313097 nodeName:}" failed. No retries permitted until 2026-01-26 18:47:14.66105913 +0000 UTC m=+1007.969253838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-zdxbz" (UID: "4c4a0a5e-ab9e-478c-8f90-741563313097") : secret "plugin-serving-cert" not found Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.161606 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4c4a0a5e-ab9e-478c-8f90-741563313097-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.166116 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.189967 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svfcm\" (UniqueName: \"kubernetes.io/projected/4c4a0a5e-ab9e-478c-8f90-741563313097-kube-api-access-svfcm\") pod 
\"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.261991 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262092 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mdk6\" (UniqueName: \"kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262121 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262168 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262253 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262287 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.262313 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363613 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mdk6\" (UniqueName: \"kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363678 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363726 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363800 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363828 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363851 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.363895 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.364729 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.364990 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.365788 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.366633 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.374645 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.374704 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.385172 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mdk6\" (UniqueName: \"kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6\") pod \"console-645c6f4f57-glmhb\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.465084 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.468745 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-f425m\" (UID: \"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.493752 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.628451 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qh796"] Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.668713 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.673374 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4a0a5e-ab9e-478c-8f90-741563313097-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zdxbz\" (UID: \"4c4a0a5e-ab9e-478c-8f90-741563313097\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.688728 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.739591 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" event={"ID":"33e00306-edd4-487d-9bc6-e49fa9692a29","Type":"ContainerStarted","Data":"2bfe1da1fe05c7d47fabf5129454bdaed57e423dbe44f816b51167a13e029d75"} Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.740534 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-99d4z" event={"ID":"1a140881-5ef3-4582-9694-e24fc14a6fb4","Type":"ContainerStarted","Data":"a2d6516b1ff9cc48414d9320b3386fdced4b227885cd167554109fc79d6b1ece"} Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.848533 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" Jan 26 18:47:14 crc kubenswrapper[4737]: I0126 18:47:14.974866 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.111496 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m"] Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.177455 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz"] Jan 26 18:47:15 crc kubenswrapper[4737]: W0126 18:47:15.181458 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c4a0a5e_ab9e_478c_8f90_741563313097.slice/crio-94e1949889a4b98377ab37f3f0fe1fa51326d851b520355cbd0ac9a4c7f025e8 WatchSource:0}: Error finding container 94e1949889a4b98377ab37f3f0fe1fa51326d851b520355cbd0ac9a4c7f025e8: Status 404 returned error can't find the container with id 94e1949889a4b98377ab37f3f0fe1fa51326d851b520355cbd0ac9a4c7f025e8 Jan 
26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.748388 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-645c6f4f57-glmhb" event={"ID":"d5a33684-359a-48ee-b9de-3f09cd04bc51","Type":"ContainerStarted","Data":"e77c00fcc8e5981a3b4bc1de3b40217df4672344e02bd378be1b988d919d7c17"} Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.748778 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-645c6f4f57-glmhb" event={"ID":"d5a33684-359a-48ee-b9de-3f09cd04bc51","Type":"ContainerStarted","Data":"36bf75cf95b4e46776a21c9c00482713b649e90bbed767f8c6a88367e9225b89"} Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.750048 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" event={"ID":"4c4a0a5e-ab9e-478c-8f90-741563313097","Type":"ContainerStarted","Data":"94e1949889a4b98377ab37f3f0fe1fa51326d851b520355cbd0ac9a4c7f025e8"} Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.751011 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" event={"ID":"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5","Type":"ContainerStarted","Data":"70c6c733ae31a7841e43555a66f84133e62701904f00f15414214141c9714d3a"} Jan 26 18:47:15 crc kubenswrapper[4737]: I0126 18:47:15.770610 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-645c6f4f57-glmhb" podStartSLOduration=1.77058853 podStartE2EDuration="1.77058853s" podCreationTimestamp="2026-01-26 18:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:47:15.768478468 +0000 UTC m=+1009.076673186" watchObservedRunningTime="2026-01-26 18:47:15.77058853 +0000 UTC m=+1009.078783238" Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.775756 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-99d4z" event={"ID":"1a140881-5ef3-4582-9694-e24fc14a6fb4","Type":"ContainerStarted","Data":"6ec264c413d51f08576cd357665d32d9aec8c372f84bb5c9edf139bb6d664685"} Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.776215 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.778454 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" event={"ID":"30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5","Type":"ContainerStarted","Data":"e6c152fe84ba9fd1fbf715557dfa2c15ef13a483ea58aa7e564e636a6d9b6b14"} Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.778578 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.784326 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" event={"ID":"33e00306-edd4-487d-9bc6-e49fa9692a29","Type":"ContainerStarted","Data":"4e47fd58830f0367985788b34c0d57c6422ad24858a926947c99e1f89ef70555"} Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.794169 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-99d4z" podStartSLOduration=2.02210778 podStartE2EDuration="5.794054546s" podCreationTimestamp="2026-01-26 18:47:13 +0000 UTC" firstStartedPulling="2026-01-26 18:47:14.201631674 +0000 UTC m=+1007.509826382" lastFinishedPulling="2026-01-26 18:47:17.97357844 +0000 UTC m=+1011.281773148" observedRunningTime="2026-01-26 18:47:18.792040658 +0000 UTC m=+1012.100235366" watchObservedRunningTime="2026-01-26 18:47:18.794054546 +0000 UTC m=+1012.102249254" Jan 26 18:47:18 crc kubenswrapper[4737]: I0126 18:47:18.824457 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" podStartSLOduration=2.94339869 podStartE2EDuration="5.824435563s" podCreationTimestamp="2026-01-26 18:47:13 +0000 UTC" firstStartedPulling="2026-01-26 18:47:15.136676098 +0000 UTC m=+1008.444870806" lastFinishedPulling="2026-01-26 18:47:18.017712971 +0000 UTC m=+1011.325907679" observedRunningTime="2026-01-26 18:47:18.82060761 +0000 UTC m=+1012.128802318" watchObservedRunningTime="2026-01-26 18:47:18.824435563 +0000 UTC m=+1012.132630271" Jan 26 18:47:19 crc kubenswrapper[4737]: I0126 18:47:19.792045 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" event={"ID":"4c4a0a5e-ab9e-478c-8f90-741563313097","Type":"ContainerStarted","Data":"14a73f433a3c8219a3e83293a6a3b6412628f76f3323878348a15e512cef85aa"} Jan 26 18:47:19 crc kubenswrapper[4737]: I0126 18:47:19.807551 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zdxbz" podStartSLOduration=2.6629412390000002 podStartE2EDuration="6.807533572s" podCreationTimestamp="2026-01-26 18:47:13 +0000 UTC" firstStartedPulling="2026-01-26 18:47:15.185290507 +0000 UTC m=+1008.493485215" lastFinishedPulling="2026-01-26 18:47:19.32988284 +0000 UTC m=+1012.638077548" observedRunningTime="2026-01-26 18:47:19.806468966 +0000 UTC m=+1013.114663674" watchObservedRunningTime="2026-01-26 18:47:19.807533572 +0000 UTC m=+1013.115728280" Jan 26 18:47:21 crc kubenswrapper[4737]: I0126 18:47:21.807771 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" event={"ID":"33e00306-edd4-487d-9bc6-e49fa9692a29","Type":"ContainerStarted","Data":"a29fdfa4d53efa982cf01781b631039e0b005924f3151b306e7f364df5150214"} Jan 26 18:47:21 crc kubenswrapper[4737]: I0126 18:47:21.824902 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-qh796" podStartSLOduration=2.348082924 podStartE2EDuration="8.824882631s" podCreationTimestamp="2026-01-26 18:47:13 +0000 UTC" firstStartedPulling="2026-01-26 18:47:14.638028816 +0000 UTC m=+1007.946223524" lastFinishedPulling="2026-01-26 18:47:21.114828523 +0000 UTC m=+1014.423023231" observedRunningTime="2026-01-26 18:47:21.823018476 +0000 UTC m=+1015.131213184" watchObservedRunningTime="2026-01-26 18:47:21.824882631 +0000 UTC m=+1015.133077339" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.163954 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-99d4z" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.494376 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.494432 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.499225 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.833147 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:47:24 crc kubenswrapper[4737]: I0126 18:47:24.903476 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"] Jan 26 18:47:30 crc kubenswrapper[4737]: I0126 18:47:30.949539 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:47:30 crc kubenswrapper[4737]: I0126 
18:47:30.950016 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:47:34 crc kubenswrapper[4737]: I0126 18:47:34.693538 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-f425m" Jan 26 18:47:49 crc kubenswrapper[4737]: I0126 18:47:49.948508 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64f7cd9bf9-xgwrd" podUID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" containerName="console" containerID="cri-o://de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda" gracePeriod=15 Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.429133 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64f7cd9bf9-xgwrd_fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2/console/0.log" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.429561 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f7cd9bf9-xgwrd" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506631 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506748 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506777 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgdq8\" (UniqueName: \"kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506812 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506840 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506883 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.506905 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca\") pod \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\" (UID: \"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2\") " Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.508520 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config" (OuterVolumeSpecName: "console-config") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.508830 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.509049 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca" (OuterVolumeSpecName: "service-ca") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.509088 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.513314 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.514057 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8" (OuterVolumeSpecName: "kube-api-access-pgdq8") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "kube-api-access-pgdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.515511 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" (UID: "fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609326 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609354 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgdq8\" (UniqueName: \"kubernetes.io/projected/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-kube-api-access-pgdq8\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609367 4737 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609376 4737 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609384 4737 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609395 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:50 crc kubenswrapper[4737]: I0126 18:47:50.609403 4737 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:51 crc 
kubenswrapper[4737]: I0126 18:47:51.054999 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64f7cd9bf9-xgwrd_fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2/console/0.log" Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.055097 4737 generic.go:334] "Generic (PLEG): container finished" podID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" containerID="de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda" exitCode=2 Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.055138 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f7cd9bf9-xgwrd" event={"ID":"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2","Type":"ContainerDied","Data":"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda"} Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.055168 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64f7cd9bf9-xgwrd" event={"ID":"fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2","Type":"ContainerDied","Data":"e1f4f889bf489708f34282013ab292842dafc3316391d97bfa100fa2263d2a01"} Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.055189 4737 scope.go:117] "RemoveContainer" containerID="de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda" Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.055403 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64f7cd9bf9-xgwrd" Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.094154 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"] Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.100619 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64f7cd9bf9-xgwrd"] Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.109769 4737 scope.go:117] "RemoveContainer" containerID="de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda" Jan 26 18:47:51 crc kubenswrapper[4737]: E0126 18:47:51.113451 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda\": container with ID starting with de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda not found: ID does not exist" containerID="de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda" Jan 26 18:47:51 crc kubenswrapper[4737]: I0126 18:47:51.113493 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda"} err="failed to get container status \"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda\": rpc error: code = NotFound desc = could not find container \"de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda\": container with ID starting with de17e6a3af95874f5ea0aab3ef32b338f257ab819a737dd6a479c97153e3feda not found: ID does not exist" Jan 26 18:47:52 crc kubenswrapper[4737]: I0126 18:47:52.990124 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" path="/var/lib/kubelet/pods/fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2/volumes" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.239911 4737 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd"] Jan 26 18:47:53 crc kubenswrapper[4737]: E0126 18:47:53.240271 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" containerName="console" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.240294 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" containerName="console" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.240437 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fadc5dc0-34c3-436b-9f35-ed7dc8a74cd2" containerName="console" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.241552 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.254052 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd"] Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.256961 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.350222 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.350304 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.350419 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29wj\" (UniqueName: \"kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.452295 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.452387 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.452456 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29wj\" (UniqueName: \"kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.452989 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.453000 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.477258 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29wj\" (UniqueName: \"kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:53 crc kubenswrapper[4737]: I0126 18:47:53.606343 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:47:54 crc kubenswrapper[4737]: I0126 18:47:54.020729 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd"] Jan 26 18:47:54 crc kubenswrapper[4737]: I0126 18:47:54.075858 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" event={"ID":"31b3687c-76cb-44be-b404-f88ed8a1b901","Type":"ContainerStarted","Data":"8b5316d9e8fae748e2974d60e6bbcd663a2704b7cf273521a6c73dd8634fa579"} Jan 26 18:47:55 crc kubenswrapper[4737]: I0126 18:47:55.082836 4737 generic.go:334] "Generic (PLEG): container finished" podID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerID="102cd4df3b530f057459583c73fe637c0b7082326939e329d0a3be4913b365dd" exitCode=0 Jan 26 18:47:55 crc kubenswrapper[4737]: I0126 18:47:55.082896 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" event={"ID":"31b3687c-76cb-44be-b404-f88ed8a1b901","Type":"ContainerDied","Data":"102cd4df3b530f057459583c73fe637c0b7082326939e329d0a3be4913b365dd"} Jan 26 18:47:58 crc kubenswrapper[4737]: I0126 18:47:58.110660 4737 generic.go:334] "Generic (PLEG): container finished" podID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerID="d1030d402265d16c4cc17b0adf551da5fed796962f2747c93aeecb34d4d37472" exitCode=0 Jan 26 18:47:58 crc kubenswrapper[4737]: I0126 18:47:58.110769 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" event={"ID":"31b3687c-76cb-44be-b404-f88ed8a1b901","Type":"ContainerDied","Data":"d1030d402265d16c4cc17b0adf551da5fed796962f2747c93aeecb34d4d37472"} Jan 26 18:47:59 crc kubenswrapper[4737]: I0126 18:47:59.120567 4737 
generic.go:334] "Generic (PLEG): container finished" podID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerID="7ca2065f87c15d2595dc99cba57c056a7f76d4a794f21d23e38a2d25ca283f12" exitCode=0 Jan 26 18:47:59 crc kubenswrapper[4737]: I0126 18:47:59.120645 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" event={"ID":"31b3687c-76cb-44be-b404-f88ed8a1b901","Type":"ContainerDied","Data":"7ca2065f87c15d2595dc99cba57c056a7f76d4a794f21d23e38a2d25ca283f12"} Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.395816 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.460491 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util\") pod \"31b3687c-76cb-44be-b404-f88ed8a1b901\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.460728 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29wj\" (UniqueName: \"kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj\") pod \"31b3687c-76cb-44be-b404-f88ed8a1b901\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.462394 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle" (OuterVolumeSpecName: "bundle") pod "31b3687c-76cb-44be-b404-f88ed8a1b901" (UID: "31b3687c-76cb-44be-b404-f88ed8a1b901"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.463199 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle\") pod \"31b3687c-76cb-44be-b404-f88ed8a1b901\" (UID: \"31b3687c-76cb-44be-b404-f88ed8a1b901\") " Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.463718 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.468957 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj" (OuterVolumeSpecName: "kube-api-access-d29wj") pod "31b3687c-76cb-44be-b404-f88ed8a1b901" (UID: "31b3687c-76cb-44be-b404-f88ed8a1b901"). InnerVolumeSpecName "kube-api-access-d29wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.480313 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util" (OuterVolumeSpecName: "util") pod "31b3687c-76cb-44be-b404-f88ed8a1b901" (UID: "31b3687c-76cb-44be-b404-f88ed8a1b901"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.565741 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d29wj\" (UniqueName: \"kubernetes.io/projected/31b3687c-76cb-44be-b404-f88ed8a1b901-kube-api-access-d29wj\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.565781 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b3687c-76cb-44be-b404-f88ed8a1b901-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.949169 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.949237 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.949540 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:48:00 crc kubenswrapper[4737]: I0126 18:48:00.950192 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:48:00 crc 
kubenswrapper[4737]: I0126 18:48:00.950259 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135" gracePeriod=600 Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.137428 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" event={"ID":"31b3687c-76cb-44be-b404-f88ed8a1b901","Type":"ContainerDied","Data":"8b5316d9e8fae748e2974d60e6bbcd663a2704b7cf273521a6c73dd8634fa579"} Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.137705 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b5316d9e8fae748e2974d60e6bbcd663a2704b7cf273521a6c73dd8634fa579" Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.137480 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd" Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.139999 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135" exitCode=0 Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.140036 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135"} Jan 26 18:48:01 crc kubenswrapper[4737]: I0126 18:48:01.140083 4737 scope.go:117] "RemoveContainer" containerID="a5aff21eb61341220e1d5ffef1d177ada5231e294c0204cf3d50e84b8883bcdf" Jan 26 18:48:02 crc kubenswrapper[4737]: I0126 18:48:02.152669 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc"} Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.070974 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq"] Jan 26 18:48:11 crc kubenswrapper[4737]: E0126 18:48:11.071824 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="util" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.071839 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="util" Jan 26 18:48:11 crc kubenswrapper[4737]: E0126 18:48:11.071846 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="pull" Jan 26 18:48:11 
crc kubenswrapper[4737]: I0126 18:48:11.071852 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="pull" Jan 26 18:48:11 crc kubenswrapper[4737]: E0126 18:48:11.071866 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="extract" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.071872 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="extract" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.072001 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b3687c-76cb-44be-b404-f88ed8a1b901" containerName="extract" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.072525 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.079397 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lj88b" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.080395 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.083599 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.088271 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.092588 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.130920 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq"] Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.238993 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-webhook-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.239064 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29l4\" (UniqueName: \"kubernetes.io/projected/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-kube-api-access-c29l4\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.239118 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-apiservice-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.341043 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-webhook-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.341152 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c29l4\" (UniqueName: \"kubernetes.io/projected/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-kube-api-access-c29l4\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.341191 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-apiservice-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.348496 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-apiservice-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.348941 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-webhook-cert\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: \"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.365013 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c29l4\" (UniqueName: \"kubernetes.io/projected/0a7ecdef-57dc-45fc-9142-3889fb44d2e9-kube-api-access-c29l4\") pod \"metallb-operator-controller-manager-948bbcb9c-jrztq\" (UID: 
\"0a7ecdef-57dc-45fc-9142-3889fb44d2e9\") " pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.395801 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.618166 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t"] Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.620195 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.623928 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.624284 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-c894j" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.627153 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.638814 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t"] Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.749236 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-webhook-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.749281 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-apiservice-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.749308 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xsr\" (UniqueName: \"kubernetes.io/projected/db9aadf5-9872-40e4-8333-da2779361dcf-kube-api-access-58xsr\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.851408 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-webhook-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.851478 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-apiservice-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.851505 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xsr\" (UniqueName: \"kubernetes.io/projected/db9aadf5-9872-40e4-8333-da2779361dcf-kube-api-access-58xsr\") pod 
\"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.856610 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-webhook-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.856859 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db9aadf5-9872-40e4-8333-da2779361dcf-apiservice-cert\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.869656 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xsr\" (UniqueName: \"kubernetes.io/projected/db9aadf5-9872-40e4-8333-da2779361dcf-kube-api-access-58xsr\") pod \"metallb-operator-webhook-server-75cffd444d-hgw8t\" (UID: \"db9aadf5-9872-40e4-8333-da2779361dcf\") " pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.961312 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:11 crc kubenswrapper[4737]: I0126 18:48:11.979151 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq"] Jan 26 18:48:12 crc kubenswrapper[4737]: I0126 18:48:12.230497 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" event={"ID":"0a7ecdef-57dc-45fc-9142-3889fb44d2e9","Type":"ContainerStarted","Data":"255c2af27b46faa92c9bd1de0c22d12c8405ff8b57293ec20416ff844e01a67a"} Jan 26 18:48:12 crc kubenswrapper[4737]: W0126 18:48:12.447915 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb9aadf5_9872_40e4_8333_da2779361dcf.slice/crio-0d328a08476429d6c47190ae53c0ff7d18940d438049fb79c4a3ea37623ebbff WatchSource:0}: Error finding container 0d328a08476429d6c47190ae53c0ff7d18940d438049fb79c4a3ea37623ebbff: Status 404 returned error can't find the container with id 0d328a08476429d6c47190ae53c0ff7d18940d438049fb79c4a3ea37623ebbff Jan 26 18:48:12 crc kubenswrapper[4737]: I0126 18:48:12.459576 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t"] Jan 26 18:48:13 crc kubenswrapper[4737]: I0126 18:48:13.240945 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" event={"ID":"db9aadf5-9872-40e4-8333-da2779361dcf","Type":"ContainerStarted","Data":"0d328a08476429d6c47190ae53c0ff7d18940d438049fb79c4a3ea37623ebbff"} Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.308318 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" 
event={"ID":"db9aadf5-9872-40e4-8333-da2779361dcf","Type":"ContainerStarted","Data":"6131a8fbc40120acbfd74410b76ee33c37e35b99505ef9be0cc002d790a46a17"} Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.308869 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.310213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" event={"ID":"0a7ecdef-57dc-45fc-9142-3889fb44d2e9","Type":"ContainerStarted","Data":"b0a0a1dd62b78ebc79be6416f76d08a10b900cc1761cc0ca03f632ee612885ed"} Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.310368 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.333206 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t" podStartSLOduration=2.232039488 podStartE2EDuration="8.333185875s" podCreationTimestamp="2026-01-26 18:48:11 +0000 UTC" firstStartedPulling="2026-01-26 18:48:12.449542263 +0000 UTC m=+1065.757736971" lastFinishedPulling="2026-01-26 18:48:18.55068865 +0000 UTC m=+1071.858883358" observedRunningTime="2026-01-26 18:48:19.325998841 +0000 UTC m=+1072.634193549" watchObservedRunningTime="2026-01-26 18:48:19.333185875 +0000 UTC m=+1072.641380583" Jan 26 18:48:19 crc kubenswrapper[4737]: I0126 18:48:19.350361 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq" podStartSLOduration=1.836324591 podStartE2EDuration="8.35034376s" podCreationTimestamp="2026-01-26 18:48:11 +0000 UTC" firstStartedPulling="2026-01-26 18:48:11.994781175 +0000 UTC m=+1065.302975893" lastFinishedPulling="2026-01-26 
18:48:18.508800354 +0000 UTC m=+1071.816995062" observedRunningTime="2026-01-26 18:48:19.34659825 +0000 UTC m=+1072.654792958" watchObservedRunningTime="2026-01-26 18:48:19.35034376 +0000 UTC m=+1072.658538468"
Jan 26 18:48:31 crc kubenswrapper[4737]: I0126 18:48:31.983410 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75cffd444d-hgw8t"
Jan 26 18:48:51 crc kubenswrapper[4737]: I0126 18:48:51.398694 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-948bbcb9c-jrztq"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.210694 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ts4kl"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.213535 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.215888 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.215963 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7v7w6"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.216808 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.243259 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.244229 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.247292 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.264966 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354044 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354106 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtsz\" (UniqueName: \"kubernetes.io/projected/db423313-ded0-4540-abdb-a7a8c5944989-kube-api-access-jxtsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354131 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354155 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-conf\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354215 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db423313-ded0-4540-abdb-a7a8c5944989-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354263 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-startup\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354280 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-sockets\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354299 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-reloader\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.354316 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn96z\" (UniqueName: \"kubernetes.io/projected/f86f264d-5704-4995-9e15-13b28bd18dc4-kube-api-access-bn96z\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.360179 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bs5fc"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.361627 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.363800 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.364163 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.364441 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.375099 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v586p"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.375276 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-54gqz"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.379929 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.380898 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-54gqz"]
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.391008 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.455826 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-sockets\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456165 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsmr\" (UniqueName: \"kubernetes.io/projected/ee468080-345d-4821-ab62-d1034fd7cd01-kube-api-access-5lsmr\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456189 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-reloader\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456240 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn96z\" (UniqueName: \"kubernetes.io/projected/f86f264d-5704-4995-9e15-13b28bd18dc4-kube-api-access-bn96z\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456287 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456305 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxtsz\" (UniqueName: \"kubernetes.io/projected/db423313-ded0-4540-abdb-a7a8c5944989-kube-api-access-jxtsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456324 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456341 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-metrics-certs\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456360 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-conf\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456378 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456413 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db423313-ded0-4540-abdb-a7a8c5944989-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456473 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee468080-345d-4821-ab62-d1034fd7cd01-metallb-excludel2\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.456495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-startup\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.457430 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-startup\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.457646 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-sockets\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.457843 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-reloader\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.458356 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: E0126 18:48:52.458544 4737 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 26 18:48:52 crc kubenswrapper[4737]: E0126 18:48:52.458585 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs podName:f86f264d-5704-4995-9e15-13b28bd18dc4 nodeName:}" failed. No retries permitted until 2026-01-26 18:48:52.95857347 +0000 UTC m=+1106.266768178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs") pod "frr-k8s-ts4kl" (UID: "f86f264d-5704-4995-9e15-13b28bd18dc4") : secret "frr-k8s-certs-secret" not found
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.458871 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f86f264d-5704-4995-9e15-13b28bd18dc4-frr-conf\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.467140 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db423313-ded0-4540-abdb-a7a8c5944989-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.476904 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn96z\" (UniqueName: \"kubernetes.io/projected/f86f264d-5704-4995-9e15-13b28bd18dc4-kube-api-access-bn96z\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.479976 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxtsz\" (UniqueName: \"kubernetes.io/projected/db423313-ded0-4540-abdb-a7a8c5944989-kube-api-access-jxtsz\") pod \"frr-k8s-webhook-server-7df86c4f6c-zg2pm\" (UID: \"db423313-ded0-4540-abdb-a7a8c5944989\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558316 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558658 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-metrics-certs\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558702 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558760 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-metrics-certs\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558789 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-cert\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558808 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee468080-345d-4821-ab62-d1034fd7cd01-metallb-excludel2\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558876 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsmr\" (UniqueName: \"kubernetes.io/projected/ee468080-345d-4821-ab62-d1034fd7cd01-kube-api-access-5lsmr\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.558923 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qzz\" (UniqueName: \"kubernetes.io/projected/316b58c7-76eb-4b53-adee-6e456286e313-kube-api-access-r5qzz\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: E0126 18:48:52.559564 4737 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 18:48:52 crc kubenswrapper[4737]: E0126 18:48:52.559604 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist podName:ee468080-345d-4821-ab62-d1034fd7cd01 nodeName:}" failed. No retries permitted until 2026-01-26 18:48:53.05959065 +0000 UTC m=+1106.367785348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist") pod "speaker-bs5fc" (UID: "ee468080-345d-4821-ab62-d1034fd7cd01") : secret "metallb-memberlist" not found
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.560939 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee468080-345d-4821-ab62-d1034fd7cd01-metallb-excludel2\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.563609 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-metrics-certs\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.579816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsmr\" (UniqueName: \"kubernetes.io/projected/ee468080-345d-4821-ab62-d1034fd7cd01-kube-api-access-5lsmr\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.660341 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qzz\" (UniqueName: \"kubernetes.io/projected/316b58c7-76eb-4b53-adee-6e456286e313-kube-api-access-r5qzz\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.660783 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-metrics-certs\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.660825 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-cert\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.666679 4737 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.677929 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-metrics-certs\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.678011 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/316b58c7-76eb-4b53-adee-6e456286e313-cert\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.685061 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qzz\" (UniqueName: \"kubernetes.io/projected/316b58c7-76eb-4b53-adee-6e456286e313-kube-api-access-r5qzz\") pod \"controller-6968d8fdc4-54gqz\" (UID: \"316b58c7-76eb-4b53-adee-6e456286e313\") " pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.711060 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.966655 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:52 crc kubenswrapper[4737]: I0126 18:48:52.970477 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f86f264d-5704-4995-9e15-13b28bd18dc4-metrics-certs\") pod \"frr-k8s-ts4kl\" (UID: \"f86f264d-5704-4995-9e15-13b28bd18dc4\") " pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.050615 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"]
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.057584 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.069005 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:53 crc kubenswrapper[4737]: E0126 18:48:53.070047 4737 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 26 18:48:53 crc kubenswrapper[4737]: E0126 18:48:53.070119 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist podName:ee468080-345d-4821-ab62-d1034fd7cd01 nodeName:}" failed. No retries permitted until 2026-01-26 18:48:54.070103394 +0000 UTC m=+1107.378298102 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist") pod "speaker-bs5fc" (UID: "ee468080-345d-4821-ab62-d1034fd7cd01") : secret "metallb-memberlist" not found
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.134463 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.208020 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-54gqz"]
Jan 26 18:48:53 crc kubenswrapper[4737]: W0126 18:48:53.213410 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod316b58c7_76eb_4b53_adee_6e456286e313.slice/crio-8af7f17ede0a53a12b39fec63add0721cea49b74627c15b287575318284805e6 WatchSource:0}: Error finding container 8af7f17ede0a53a12b39fec63add0721cea49b74627c15b287575318284805e6: Status 404 returned error can't find the container with id 8af7f17ede0a53a12b39fec63add0721cea49b74627c15b287575318284805e6
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.582792 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54gqz" event={"ID":"316b58c7-76eb-4b53-adee-6e456286e313","Type":"ContainerStarted","Data":"8af7f17ede0a53a12b39fec63add0721cea49b74627c15b287575318284805e6"}
Jan 26 18:48:53 crc kubenswrapper[4737]: I0126 18:48:53.583858 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm" event={"ID":"db423313-ded0-4540-abdb-a7a8c5944989","Type":"ContainerStarted","Data":"7fcd80d82cac59a039a57a333b82521e75df39e4c3c52f4eedb653be9df47d02"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.088284 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.093234 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee468080-345d-4821-ab62-d1034fd7cd01-memberlist\") pod \"speaker-bs5fc\" (UID: \"ee468080-345d-4821-ab62-d1034fd7cd01\") " pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.192029 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bs5fc"
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.628433 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"2c9865219106622e6102af86fd78a94510181adf475ad8abf61a5e4c60453fa4"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.640920 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54gqz" event={"ID":"316b58c7-76eb-4b53-adee-6e456286e313","Type":"ContainerStarted","Data":"6e49a64fcc013d949fd3e317deadfa953917916f77ae27d957eef1e2ac873242"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.640963 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54gqz" event={"ID":"316b58c7-76eb-4b53-adee-6e456286e313","Type":"ContainerStarted","Data":"5f0f2e53ec8f0e5a6272eccb12bf858e3d5b962990c6e32a4d573754d3074916"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.640985 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-54gqz"
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.642433 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bs5fc" event={"ID":"ee468080-345d-4821-ab62-d1034fd7cd01","Type":"ContainerStarted","Data":"50c3390a40e0e03fc063355664567332b3a8f50cbded320a690def2d9a956cf5"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.642471 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bs5fc" event={"ID":"ee468080-345d-4821-ab62-d1034fd7cd01","Type":"ContainerStarted","Data":"39f7a1b118a50aacc0e8f45a11bab5d17a1e311c8dd0053dd27270747ca2ffda"}
Jan 26 18:48:54 crc kubenswrapper[4737]: I0126 18:48:54.681397 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-54gqz" podStartSLOduration=2.681374407 podStartE2EDuration="2.681374407s" podCreationTimestamp="2026-01-26 18:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:48:54.679399161 +0000 UTC m=+1107.987593869" watchObservedRunningTime="2026-01-26 18:48:54.681374407 +0000 UTC m=+1107.989569115"
Jan 26 18:48:55 crc kubenswrapper[4737]: I0126 18:48:55.651165 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bs5fc" event={"ID":"ee468080-345d-4821-ab62-d1034fd7cd01","Type":"ContainerStarted","Data":"92708806125dac7321f516010e98da07ed08b8465eda9f2d9cd3a77461ff1cc4"}
Jan 26 18:48:55 crc kubenswrapper[4737]: I0126 18:48:55.674464 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bs5fc" podStartSLOduration=3.674442372 podStartE2EDuration="3.674442372s" podCreationTimestamp="2026-01-26 18:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:48:55.672181449 +0000 UTC m=+1108.980376157" watchObservedRunningTime="2026-01-26 18:48:55.674442372 +0000 UTC m=+1108.982637080"
Jan 26 18:48:56 crc kubenswrapper[4737]: I0126 18:48:56.659718 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bs5fc"
Jan 26 18:49:02 crc kubenswrapper[4737]: I0126 18:49:02.730334 4737 generic.go:334] "Generic (PLEG): container finished" podID="f86f264d-5704-4995-9e15-13b28bd18dc4" containerID="2617f263bed996c55bb06ad1412a739590ac01860424500d7ec19182229a8f58" exitCode=0
Jan 26 18:49:02 crc kubenswrapper[4737]: I0126 18:49:02.730466 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerDied","Data":"2617f263bed996c55bb06ad1412a739590ac01860424500d7ec19182229a8f58"}
Jan 26 18:49:02 crc kubenswrapper[4737]: I0126 18:49:02.732770 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm" event={"ID":"db423313-ded0-4540-abdb-a7a8c5944989","Type":"ContainerStarted","Data":"a5eaa47b9485aec676e6d261793a9c886bbca9f31127049a635a2be0199e85b9"}
Jan 26 18:49:02 crc kubenswrapper[4737]: I0126 18:49:02.732935 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm"
Jan 26 18:49:02 crc kubenswrapper[4737]: I0126 18:49:02.870263 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm" podStartSLOduration=2.121276075 podStartE2EDuration="10.87020763s" podCreationTimestamp="2026-01-26 18:48:52 +0000 UTC" firstStartedPulling="2026-01-26 18:48:53.057236232 +0000 UTC m=+1106.365430940" lastFinishedPulling="2026-01-26 18:49:01.806167787 +0000 UTC m=+1115.114362495" observedRunningTime="2026-01-26 18:49:02.833932203 +0000 UTC m=+1116.142126911" watchObservedRunningTime="2026-01-26 18:49:02.87020763 +0000 UTC m=+1116.178402328"
Jan 26 18:49:03 crc kubenswrapper[4737]: I0126 18:49:03.743978 4737 generic.go:334] "Generic (PLEG): container finished" podID="f86f264d-5704-4995-9e15-13b28bd18dc4" containerID="d7fc35e08b735108300d3cfa0f848bf47baf0f0efdc00161116a2d704fbf60c1" exitCode=0
Jan 26 18:49:03 crc kubenswrapper[4737]: I0126 18:49:03.744057 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerDied","Data":"d7fc35e08b735108300d3cfa0f848bf47baf0f0efdc00161116a2d704fbf60c1"}
Jan 26 18:49:04 crc kubenswrapper[4737]: I0126 18:49:04.196806 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bs5fc"
Jan 26 18:49:04 crc kubenswrapper[4737]: I0126 18:49:04.754358 4737 generic.go:334] "Generic (PLEG): container finished" podID="f86f264d-5704-4995-9e15-13b28bd18dc4" containerID="dafce5034c8579e114192f0a67cd5fd5f21862556ac0a077847f8e5e64d8ce4c" exitCode=0
Jan 26 18:49:04 crc kubenswrapper[4737]: I0126 18:49:04.754411 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerDied","Data":"dafce5034c8579e114192f0a67cd5fd5f21862556ac0a077847f8e5e64d8ce4c"}
Jan 26 18:49:05 crc kubenswrapper[4737]: I0126 18:49:05.785711 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"122f95671ebcc6ef7c873f5294621c53100efab959f93221c151048fabb38347"}
Jan 26 18:49:05 crc kubenswrapper[4737]: I0126 18:49:05.786118 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"e3c77c3dc0eaceb2e575aa7074a8f84bbbc8b5b93694d0a062778fb0019d45a9"}
Jan 26 18:49:05 crc kubenswrapper[4737]: I0126 18:49:05.786137 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"4be820939a639e6a9ec7ee7f090a0f31969046f8c86bcbe9b8891ae8ceb5598f"}
Jan 26 18:49:05 crc kubenswrapper[4737]: I0126 18:49:05.786147 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"f87b68ae3b8e347b9cec7c52ebab4b7b5bdebf6cbabcd531f920452161ab7986"}
Jan 26 18:49:05 crc kubenswrapper[4737]: I0126 18:49:05.786156 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"3cb1072fdd870d6c5610237e06d88abb6f9e0cc8e8013575d345852df985bb23"}
Jan 26 18:49:06 crc kubenswrapper[4737]: I0126 18:49:06.797798 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ts4kl" event={"ID":"f86f264d-5704-4995-9e15-13b28bd18dc4","Type":"ContainerStarted","Data":"358d1f50d80f190fa425c367c65037280a876954e0b7f4798e649243249fe14f"}
Jan 26 18:49:06 crc kubenswrapper[4737]: I0126 18:49:06.798235 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:49:06 crc kubenswrapper[4737]: I0126 18:49:06.822342 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ts4kl" podStartSLOduration=6.747081978 podStartE2EDuration="14.822322428s" podCreationTimestamp="2026-01-26 18:48:52 +0000 UTC" firstStartedPulling="2026-01-26 18:48:53.7036047 +0000 UTC m=+1107.011799408" lastFinishedPulling="2026-01-26 18:49:01.77884515 +0000 UTC m=+1115.087039858" observedRunningTime="2026-01-26 18:49:06.817865243 +0000 UTC m=+1120.126059951" watchObservedRunningTime="2026-01-26 18:49:06.822322428 +0000 UTC m=+1120.130517136"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.180411 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ppc5t"]
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.200785 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ppc5t"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.218761 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.218989 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-dpbc6"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.219128 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.232891 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ppc5t"]
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.347857 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22b86\" (UniqueName: \"kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86\") pod \"openstack-operator-index-ppc5t\" (UID: \"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666\") " pod="openstack-operators/openstack-operator-index-ppc5t"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.450325 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22b86\" (UniqueName: \"kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86\") pod \"openstack-operator-index-ppc5t\" (UID: \"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666\") " pod="openstack-operators/openstack-operator-index-ppc5t"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.470130 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22b86\" (UniqueName: \"kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86\") pod \"openstack-operator-index-ppc5t\" (UID: \"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666\") " pod="openstack-operators/openstack-operator-index-ppc5t"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.551818 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ppc5t"
Jan 26 18:49:07 crc kubenswrapper[4737]: I0126 18:49:07.989901 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ppc5t"]
Jan 26 18:49:07 crc kubenswrapper[4737]: W0126 18:49:07.996947 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3f7ad6f_94a0_4fb5_92cf_dccc8602c666.slice/crio-61cba0e10eec9a026e45be42a9886718537a5bba9082e9fa6dcce65df8b99af1 WatchSource:0}: Error finding container 61cba0e10eec9a026e45be42a9886718537a5bba9082e9fa6dcce65df8b99af1: Status 404 returned error can't find the container with id 61cba0e10eec9a026e45be42a9886718537a5bba9082e9fa6dcce65df8b99af1
Jan 26 18:49:08 crc kubenswrapper[4737]: I0126 18:49:08.134900 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:49:08 crc kubenswrapper[4737]: I0126 18:49:08.182757 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ts4kl"
Jan 26 18:49:08 crc kubenswrapper[4737]: I0126 18:49:08.823560 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ppc5t" event={"ID":"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666","Type":"ContainerStarted","Data":"61cba0e10eec9a026e45be42a9886718537a5bba9082e9fa6dcce65df8b99af1"}
Jan 26 18:49:10 crc kubenswrapper[4737]: I0126 18:49:10.557588 4737 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack-operators/openstack-operator-index-ppc5t"] Jan 26 18:49:10 crc kubenswrapper[4737]: I0126 18:49:10.839215 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ppc5t" event={"ID":"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666","Type":"ContainerStarted","Data":"0416147e61dab23a1ea8d9e04c50042ff8f1bced36025fc2e651106fd8cd83ca"} Jan 26 18:49:10 crc kubenswrapper[4737]: I0126 18:49:10.863970 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ppc5t" podStartSLOduration=2.035616702 podStartE2EDuration="3.863947546s" podCreationTimestamp="2026-01-26 18:49:07 +0000 UTC" firstStartedPulling="2026-01-26 18:49:07.998903619 +0000 UTC m=+1121.307098327" lastFinishedPulling="2026-01-26 18:49:09.827234463 +0000 UTC m=+1123.135429171" observedRunningTime="2026-01-26 18:49:10.854340292 +0000 UTC m=+1124.162535020" watchObservedRunningTime="2026-01-26 18:49:10.863947546 +0000 UTC m=+1124.172142254" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.167135 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n9rk2"] Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.168152 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.181897 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n9rk2"] Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.325487 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtfg\" (UniqueName: \"kubernetes.io/projected/8f103d19-388b-408e-a7e5-b17428b986c9-kube-api-access-8rtfg\") pod \"openstack-operator-index-n9rk2\" (UID: \"8f103d19-388b-408e-a7e5-b17428b986c9\") " pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.427376 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtfg\" (UniqueName: \"kubernetes.io/projected/8f103d19-388b-408e-a7e5-b17428b986c9-kube-api-access-8rtfg\") pod \"openstack-operator-index-n9rk2\" (UID: \"8f103d19-388b-408e-a7e5-b17428b986c9\") " pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.448035 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rtfg\" (UniqueName: \"kubernetes.io/projected/8f103d19-388b-408e-a7e5-b17428b986c9-kube-api-access-8rtfg\") pod \"openstack-operator-index-n9rk2\" (UID: \"8f103d19-388b-408e-a7e5-b17428b986c9\") " pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.489160 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.846245 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ppc5t" podUID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" containerName="registry-server" containerID="cri-o://0416147e61dab23a1ea8d9e04c50042ff8f1bced36025fc2e651106fd8cd83ca" gracePeriod=2 Jan 26 18:49:11 crc kubenswrapper[4737]: W0126 18:49:11.907584 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f103d19_388b_408e_a7e5_b17428b986c9.slice/crio-dc159089e982191112e58ed647ffa7d96e4c93b7fb4059e0225aa04cc81c00a6 WatchSource:0}: Error finding container dc159089e982191112e58ed647ffa7d96e4c93b7fb4059e0225aa04cc81c00a6: Status 404 returned error can't find the container with id dc159089e982191112e58ed647ffa7d96e4c93b7fb4059e0225aa04cc81c00a6 Jan 26 18:49:11 crc kubenswrapper[4737]: I0126 18:49:11.909961 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n9rk2"] Jan 26 18:49:12 crc kubenswrapper[4737]: I0126 18:49:12.565179 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-zg2pm" Jan 26 18:49:12 crc kubenswrapper[4737]: I0126 18:49:12.716665 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-54gqz" Jan 26 18:49:12 crc kubenswrapper[4737]: I0126 18:49:12.854208 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n9rk2" event={"ID":"8f103d19-388b-408e-a7e5-b17428b986c9","Type":"ContainerStarted","Data":"dc159089e982191112e58ed647ffa7d96e4c93b7fb4059e0225aa04cc81c00a6"} Jan 26 18:49:12 crc kubenswrapper[4737]: I0126 18:49:12.856271 4737 generic.go:334] "Generic (PLEG): container 
finished" podID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" containerID="0416147e61dab23a1ea8d9e04c50042ff8f1bced36025fc2e651106fd8cd83ca" exitCode=0 Jan 26 18:49:12 crc kubenswrapper[4737]: I0126 18:49:12.856305 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ppc5t" event={"ID":"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666","Type":"ContainerDied","Data":"0416147e61dab23a1ea8d9e04c50042ff8f1bced36025fc2e651106fd8cd83ca"} Jan 26 18:49:13 crc kubenswrapper[4737]: I0126 18:49:13.864145 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n9rk2" event={"ID":"8f103d19-388b-408e-a7e5-b17428b986c9","Type":"ContainerStarted","Data":"6c47e1535c890afdcc0d09d0e35d786bfd9a602395ec7e58b03aba329234f5ac"} Jan 26 18:49:13 crc kubenswrapper[4737]: I0126 18:49:13.884898 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n9rk2" podStartSLOduration=2.504495051 podStartE2EDuration="2.884871635s" podCreationTimestamp="2026-01-26 18:49:11 +0000 UTC" firstStartedPulling="2026-01-26 18:49:11.911885523 +0000 UTC m=+1125.220080251" lastFinishedPulling="2026-01-26 18:49:12.292262127 +0000 UTC m=+1125.600456835" observedRunningTime="2026-01-26 18:49:13.882301915 +0000 UTC m=+1127.190496623" watchObservedRunningTime="2026-01-26 18:49:13.884871635 +0000 UTC m=+1127.193066353" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.026050 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ppc5t" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.172778 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22b86\" (UniqueName: \"kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86\") pod \"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666\" (UID: \"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666\") " Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.178815 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86" (OuterVolumeSpecName: "kube-api-access-22b86") pod "b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" (UID: "b3f7ad6f-94a0-4fb5-92cf-dccc8602c666"). InnerVolumeSpecName "kube-api-access-22b86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.275313 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22b86\" (UniqueName: \"kubernetes.io/projected/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666-kube-api-access-22b86\") on node \"crc\" DevicePath \"\"" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.872611 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ppc5t" event={"ID":"b3f7ad6f-94a0-4fb5-92cf-dccc8602c666","Type":"ContainerDied","Data":"61cba0e10eec9a026e45be42a9886718537a5bba9082e9fa6dcce65df8b99af1"} Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.872670 4737 scope.go:117] "RemoveContainer" containerID="0416147e61dab23a1ea8d9e04c50042ff8f1bced36025fc2e651106fd8cd83ca" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.872795 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ppc5t" Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.903738 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ppc5t"] Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.909757 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ppc5t"] Jan 26 18:49:14 crc kubenswrapper[4737]: I0126 18:49:14.990674 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" path="/var/lib/kubelet/pods/b3f7ad6f-94a0-4fb5-92cf-dccc8602c666/volumes" Jan 26 18:49:21 crc kubenswrapper[4737]: I0126 18:49:21.489632 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:21 crc kubenswrapper[4737]: I0126 18:49:21.490397 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:21 crc kubenswrapper[4737]: I0126 18:49:21.522315 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:21 crc kubenswrapper[4737]: I0126 18:49:21.948128 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-n9rk2" Jan 26 18:49:23 crc kubenswrapper[4737]: I0126 18:49:23.140349 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ts4kl" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.880625 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp"] Jan 26 18:49:28 crc kubenswrapper[4737]: E0126 18:49:28.881539 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" containerName="registry-server" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.881555 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" containerName="registry-server" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.881711 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f7ad6f-94a0-4fb5-92cf-dccc8602c666" containerName="registry-server" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.882972 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.885438 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rtt5z" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.889781 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp"] Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.999108 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg45\" (UniqueName: \"kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.999663 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " 
pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:28 crc kubenswrapper[4737]: I0126 18:49:28.999771 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.101535 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzg45\" (UniqueName: \"kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.101593 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.101626 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 
18:49:29.102362 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.102442 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.136500 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzg45\" (UniqueName: \"kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45\") pod \"5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.202792 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.754245 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp"] Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.978513 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerStarted","Data":"7511aae43cd647e6572178ca58cbf0660ebf705d42a71670eaec2774118acbcd"} Jan 26 18:49:29 crc kubenswrapper[4737]: I0126 18:49:29.978900 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerStarted","Data":"619b001410ae45af05264409bd449333f7b0cbe80ca692044aa2cbae3da360a6"} Jan 26 18:49:30 crc kubenswrapper[4737]: I0126 18:49:30.989720 4737 generic.go:334] "Generic (PLEG): container finished" podID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerID="7511aae43cd647e6572178ca58cbf0660ebf705d42a71670eaec2774118acbcd" exitCode=0 Jan 26 18:49:31 crc kubenswrapper[4737]: I0126 18:49:31.001127 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerDied","Data":"7511aae43cd647e6572178ca58cbf0660ebf705d42a71670eaec2774118acbcd"} Jan 26 18:49:32 crc kubenswrapper[4737]: I0126 18:49:31.999771 4737 generic.go:334] "Generic (PLEG): container finished" podID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerID="2be2dbe6d9c0a30592c4b4e91aedb34663e56fb208a20964d8fb2f603c549732" exitCode=0 Jan 26 18:49:32 crc kubenswrapper[4737]: I0126 18:49:32.000165 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerDied","Data":"2be2dbe6d9c0a30592c4b4e91aedb34663e56fb208a20964d8fb2f603c549732"} Jan 26 18:49:33 crc kubenswrapper[4737]: I0126 18:49:33.012409 4737 generic.go:334] "Generic (PLEG): container finished" podID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerID="09392905714c2db0daa5a9b8bbbbc5b801fc81d71be1742030738fbe67be1669" exitCode=0 Jan 26 18:49:33 crc kubenswrapper[4737]: I0126 18:49:33.012481 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerDied","Data":"09392905714c2db0daa5a9b8bbbbc5b801fc81d71be1742030738fbe67be1669"} Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.301313 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.487187 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzg45\" (UniqueName: \"kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45\") pod \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.487291 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util\") pod \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.487351 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle\") pod \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\" (UID: \"ad64c1f6-5d9c-4ec5-990c-354f54f9f183\") " Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.488367 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle" (OuterVolumeSpecName: "bundle") pod "ad64c1f6-5d9c-4ec5-990c-354f54f9f183" (UID: "ad64c1f6-5d9c-4ec5-990c-354f54f9f183"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.492870 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45" (OuterVolumeSpecName: "kube-api-access-xzg45") pod "ad64c1f6-5d9c-4ec5-990c-354f54f9f183" (UID: "ad64c1f6-5d9c-4ec5-990c-354f54f9f183"). InnerVolumeSpecName "kube-api-access-xzg45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.506251 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util" (OuterVolumeSpecName: "util") pod "ad64c1f6-5d9c-4ec5-990c-354f54f9f183" (UID: "ad64c1f6-5d9c-4ec5-990c-354f54f9f183"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.589296 4737 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.589334 4737 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:49:34 crc kubenswrapper[4737]: I0126 18:49:34.589344 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzg45\" (UniqueName: \"kubernetes.io/projected/ad64c1f6-5d9c-4ec5-990c-354f54f9f183-kube-api-access-xzg45\") on node \"crc\" DevicePath \"\"" Jan 26 18:49:35 crc kubenswrapper[4737]: I0126 18:49:35.027707 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" event={"ID":"ad64c1f6-5d9c-4ec5-990c-354f54f9f183","Type":"ContainerDied","Data":"619b001410ae45af05264409bd449333f7b0cbe80ca692044aa2cbae3da360a6"} Jan 26 18:49:35 crc kubenswrapper[4737]: I0126 18:49:35.027773 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp" Jan 26 18:49:35 crc kubenswrapper[4737]: I0126 18:49:35.027783 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619b001410ae45af05264409bd449333f7b0cbe80ca692044aa2cbae3da360a6" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.325987 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-848546446f-8xbh6"] Jan 26 18:49:40 crc kubenswrapper[4737]: E0126 18:49:40.331589 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="pull" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.331673 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="pull" Jan 26 18:49:40 crc kubenswrapper[4737]: E0126 18:49:40.331744 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="extract" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.331797 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="extract" Jan 26 18:49:40 crc kubenswrapper[4737]: E0126 18:49:40.331855 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="util" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.331906 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="util" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.332150 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad64c1f6-5d9c-4ec5-990c-354f54f9f183" containerName="extract" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.332822 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.334927 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-h29ds" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.370526 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-848546446f-8xbh6"] Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.483679 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbv89\" (UniqueName: \"kubernetes.io/projected/de29bea2-d234-4bc2-b1fc-90a18e84ed17-kube-api-access-mbv89\") pod \"openstack-operator-controller-init-848546446f-8xbh6\" (UID: \"de29bea2-d234-4bc2-b1fc-90a18e84ed17\") " pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.585418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbv89\" (UniqueName: \"kubernetes.io/projected/de29bea2-d234-4bc2-b1fc-90a18e84ed17-kube-api-access-mbv89\") pod \"openstack-operator-controller-init-848546446f-8xbh6\" (UID: \"de29bea2-d234-4bc2-b1fc-90a18e84ed17\") " pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.615884 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbv89\" (UniqueName: \"kubernetes.io/projected/de29bea2-d234-4bc2-b1fc-90a18e84ed17-kube-api-access-mbv89\") pod \"openstack-operator-controller-init-848546446f-8xbh6\" (UID: \"de29bea2-d234-4bc2-b1fc-90a18e84ed17\") " pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:49:40 crc kubenswrapper[4737]: I0126 18:49:40.660854 4737 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:49:41 crc kubenswrapper[4737]: I0126 18:49:41.343351 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-848546446f-8xbh6"] Jan 26 18:49:42 crc kubenswrapper[4737]: I0126 18:49:42.086842 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" event={"ID":"de29bea2-d234-4bc2-b1fc-90a18e84ed17","Type":"ContainerStarted","Data":"66bb812fc3edfe8987f46b2dfce57df7df145495d004636f6274d66fc59523bf"} Jan 26 18:49:47 crc kubenswrapper[4737]: I0126 18:49:47.203650 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" event={"ID":"de29bea2-d234-4bc2-b1fc-90a18e84ed17","Type":"ContainerStarted","Data":"ef7b5ff84bb2c78a06e97a5f177a8dc0fa8243fa35dff2ef8400f842dee2a6f3"} Jan 26 18:49:47 crc kubenswrapper[4737]: I0126 18:49:47.205590 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:50:00 crc kubenswrapper[4737]: I0126 18:50:00.664138 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" Jan 26 18:50:00 crc kubenswrapper[4737]: I0126 18:50:00.692697 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-848546446f-8xbh6" podStartSLOduration=15.116985416 podStartE2EDuration="20.692679055s" podCreationTimestamp="2026-01-26 18:49:40 +0000 UTC" firstStartedPulling="2026-01-26 18:49:41.362957119 +0000 UTC m=+1154.671151827" lastFinishedPulling="2026-01-26 18:49:46.938650758 +0000 UTC m=+1160.246845466" observedRunningTime="2026-01-26 18:49:47.234790175 +0000 UTC 
m=+1160.542984883" watchObservedRunningTime="2026-01-26 18:50:00.692679055 +0000 UTC m=+1174.000873763" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.343264 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.345241 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.349272 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bsrfl" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.362731 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.364056 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.367661 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4wvv7" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.384472 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.398131 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.432324 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.435873 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.455680 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8trpx" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.481490 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.483489 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.497009 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-trxwh" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.517807 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl86t\" (UniqueName: \"kubernetes.io/projected/97c0989d-f677-4460-b62b-4733c7db29d4-kube-api-access-gl86t\") pod \"glance-operator-controller-manager-78fdd796fd-bl8hk\" (UID: \"97c0989d-f677-4460-b62b-4733c7db29d4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.518153 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv25t\" (UniqueName: \"kubernetes.io/projected/6cc46694-b15a-4417-a0a9-f4c13184f2ca-kube-api-access-rv25t\") pod \"cinder-operator-controller-manager-7478f7dbf9-hbqjs\" (UID: \"6cc46694-b15a-4417-a0a9-f4c13184f2ca\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.518317 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnt7x\" (UniqueName: \"kubernetes.io/projected/288df3c7-1220-419c-bde6-67ee3922b8ad-kube-api-access-vnt7x\") pod \"barbican-operator-controller-manager-7f86f8796f-p42h8\" (UID: \"288df3c7-1220-419c-bde6-67ee3922b8ad\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.518430 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fgd\" (UniqueName: 
\"kubernetes.io/projected/62ddf97f-7d75-4667-9480-17cb809b98f5-kube-api-access-v9fgd\") pod \"designate-operator-controller-manager-b45d7bf98-6mjbw\" (UID: \"62ddf97f-7d75-4667-9480-17cb809b98f5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.552153 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.567153 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.568532 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.575232 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.579247 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hp8l5" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.600492 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.614005 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.615526 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.619378 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9hx59" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620705 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwkb\" (UniqueName: \"kubernetes.io/projected/d80defd5-46d2-4e20-b093-dff95dca651b-kube-api-access-vpwkb\") pod \"horizon-operator-controller-manager-77d5c5b54f-kq82d\" (UID: \"d80defd5-46d2-4e20-b093-dff95dca651b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620740 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbn2\" (UniqueName: \"kubernetes.io/projected/3508c1f8-c9d9-41bf-b71e-eebb13eb5e86-kube-api-access-2tbn2\") pod \"heat-operator-controller-manager-594c8c9d5d-j9nc9\" (UID: \"3508c1f8-c9d9-41bf-b71e-eebb13eb5e86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620764 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv25t\" (UniqueName: \"kubernetes.io/projected/6cc46694-b15a-4417-a0a9-f4c13184f2ca-kube-api-access-rv25t\") pod \"cinder-operator-controller-manager-7478f7dbf9-hbqjs\" (UID: \"6cc46694-b15a-4417-a0a9-f4c13184f2ca\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620808 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnt7x\" (UniqueName: \"kubernetes.io/projected/288df3c7-1220-419c-bde6-67ee3922b8ad-kube-api-access-vnt7x\") pod 
\"barbican-operator-controller-manager-7f86f8796f-p42h8\" (UID: \"288df3c7-1220-419c-bde6-67ee3922b8ad\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620855 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fgd\" (UniqueName: \"kubernetes.io/projected/62ddf97f-7d75-4667-9480-17cb809b98f5-kube-api-access-v9fgd\") pod \"designate-operator-controller-manager-b45d7bf98-6mjbw\" (UID: \"62ddf97f-7d75-4667-9480-17cb809b98f5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.620919 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl86t\" (UniqueName: \"kubernetes.io/projected/97c0989d-f677-4460-b62b-4733c7db29d4-kube-api-access-gl86t\") pod \"glance-operator-controller-manager-78fdd796fd-bl8hk\" (UID: \"97c0989d-f677-4460-b62b-4733c7db29d4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.628180 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.629900 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.632347 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c27vq" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.632596 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.655460 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.662521 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl86t\" (UniqueName: \"kubernetes.io/projected/97c0989d-f677-4460-b62b-4733c7db29d4-kube-api-access-gl86t\") pod \"glance-operator-controller-manager-78fdd796fd-bl8hk\" (UID: \"97c0989d-f677-4460-b62b-4733c7db29d4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.666668 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv25t\" (UniqueName: \"kubernetes.io/projected/6cc46694-b15a-4417-a0a9-f4c13184f2ca-kube-api-access-rv25t\") pod \"cinder-operator-controller-manager-7478f7dbf9-hbqjs\" (UID: \"6cc46694-b15a-4417-a0a9-f4c13184f2ca\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.671652 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.693298 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnt7x\" (UniqueName: 
\"kubernetes.io/projected/288df3c7-1220-419c-bde6-67ee3922b8ad-kube-api-access-vnt7x\") pod \"barbican-operator-controller-manager-7f86f8796f-p42h8\" (UID: \"288df3c7-1220-419c-bde6-67ee3922b8ad\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.700264 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.701443 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.706441 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.717597 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fz5kc" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.722010 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25wb5\" (UniqueName: \"kubernetes.io/projected/b3a010fd-4f62-40c6-a377-be5c6f2e6ba7-kube-api-access-25wb5\") pod \"ironic-operator-controller-manager-598f7747c9-jpmmh\" (UID: \"b3a010fd-4f62-40c6-a377-be5c6f2e6ba7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.722092 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwkb\" (UniqueName: \"kubernetes.io/projected/d80defd5-46d2-4e20-b093-dff95dca651b-kube-api-access-vpwkb\") pod \"horizon-operator-controller-manager-77d5c5b54f-kq82d\" (UID: \"d80defd5-46d2-4e20-b093-dff95dca651b\") " 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.722130 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tbn2\" (UniqueName: \"kubernetes.io/projected/3508c1f8-c9d9-41bf-b71e-eebb13eb5e86-kube-api-access-2tbn2\") pod \"heat-operator-controller-manager-594c8c9d5d-j9nc9\" (UID: \"3508c1f8-c9d9-41bf-b71e-eebb13eb5e86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.722229 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.722264 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2qs\" (UniqueName: \"kubernetes.io/projected/6904aa8b-12dd-4139-9a9f-f60be010cf3b-kube-api-access-5h2qs\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.732019 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fgd\" (UniqueName: \"kubernetes.io/projected/62ddf97f-7d75-4667-9480-17cb809b98f5-kube-api-access-v9fgd\") pod \"designate-operator-controller-manager-b45d7bf98-6mjbw\" (UID: \"62ddf97f-7d75-4667-9480-17cb809b98f5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.752177 4737 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.753385 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tbn2\" (UniqueName: \"kubernetes.io/projected/3508c1f8-c9d9-41bf-b71e-eebb13eb5e86-kube-api-access-2tbn2\") pod \"heat-operator-controller-manager-594c8c9d5d-j9nc9\" (UID: \"3508c1f8-c9d9-41bf-b71e-eebb13eb5e86\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.753813 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.754385 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.756244 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwkb\" (UniqueName: \"kubernetes.io/projected/d80defd5-46d2-4e20-b093-dff95dca651b-kube-api-access-vpwkb\") pod \"horizon-operator-controller-manager-77d5c5b54f-kq82d\" (UID: \"d80defd5-46d2-4e20-b093-dff95dca651b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.761602 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wsgdf" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.781134 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.788939 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.790287 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.794336 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-s84ns" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.795193 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.806609 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.808215 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.810143 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-c87q7" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.811228 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.813988 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.818666 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.819762 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.820266 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.830419 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tpfjr" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.833469 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.833534 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2qs\" (UniqueName: \"kubernetes.io/projected/6904aa8b-12dd-4139-9a9f-f60be010cf3b-kube-api-access-5h2qs\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.833634 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25wb5\" 
(UniqueName: \"kubernetes.io/projected/b3a010fd-4f62-40c6-a377-be5c6f2e6ba7-kube-api-access-25wb5\") pod \"ironic-operator-controller-manager-598f7747c9-jpmmh\" (UID: \"b3a010fd-4f62-40c6-a377-be5c6f2e6ba7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:50:30 crc kubenswrapper[4737]: E0126 18:50:30.833701 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:30 crc kubenswrapper[4737]: E0126 18:50:30.833774 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:50:31.333751257 +0000 UTC m=+1204.641946015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.847950 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.849543 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.851446 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qsgbl" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.866763 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2qs\" (UniqueName: \"kubernetes.io/projected/6904aa8b-12dd-4139-9a9f-f60be010cf3b-kube-api-access-5h2qs\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.869899 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25wb5\" (UniqueName: \"kubernetes.io/projected/b3a010fd-4f62-40c6-a377-be5c6f2e6ba7-kube-api-access-25wb5\") pod \"ironic-operator-controller-manager-598f7747c9-jpmmh\" (UID: \"b3a010fd-4f62-40c6-a377-be5c6f2e6ba7\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.882284 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.901719 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.922371 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.932123 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.935384 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvqh\" (UniqueName: \"kubernetes.io/projected/01b83dfe-58bb-40fa-a0e8-b942b4c79b72-kube-api-access-knvqh\") pod \"neutron-operator-controller-manager-78d58447c5-tz995\" (UID: \"01b83dfe-58bb-40fa-a0e8-b942b4c79b72\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.935442 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxp4\" (UniqueName: \"kubernetes.io/projected/0d2709bf-2113-45d7-94a1-816bc230044a-kube-api-access-7nxp4\") pod \"manila-operator-controller-manager-78c6999f6f-v9b85\" (UID: \"0d2709bf-2113-45d7-94a1-816bc230044a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.935553 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9cvv\" (UniqueName: \"kubernetes.io/projected/5b2ad507-8ef0-40e5-a10c-d5ed62a8181e-kube-api-access-l9cvv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz\" (UID: \"5b2ad507-8ef0-40e5-a10c-d5ed62a8181e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.935578 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrsg\" (UniqueName: \"kubernetes.io/projected/03d41d00-eefc-45c4-aaea-f09a5e34362b-kube-api-access-dwrsg\") pod \"keystone-operator-controller-manager-b8b6d4659-zbp84\" (UID: \"03d41d00-eefc-45c4-aaea-f09a5e34362b\") " 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.941581 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6w8dt" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.949013 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.949140 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.954533 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.968178 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"] Jan 26 18:50:30 crc kubenswrapper[4737]: I0126 18:50:30.990254 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037159 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxp4\" (UniqueName: \"kubernetes.io/projected/0d2709bf-2113-45d7-94a1-816bc230044a-kube-api-access-7nxp4\") pod \"manila-operator-controller-manager-78c6999f6f-v9b85\" (UID: \"0d2709bf-2113-45d7-94a1-816bc230044a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037274 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b55z\" (UniqueName: \"kubernetes.io/projected/3164f5a5-0f37-4ab6-bc2a-51978eb9f842-kube-api-access-9b55z\") pod \"nova-operator-controller-manager-7bdb645866-xrm44\" (UID: \"3164f5a5-0f37-4ab6-bc2a-51978eb9f842\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037376 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8dw\" (UniqueName: \"kubernetes.io/projected/284309e9-61a9-47c4-918a-6f097cf10aa1-kube-api-access-rh8dw\") pod \"octavia-operator-controller-manager-5f4cd88d46-qr8vf\" (UID: \"284309e9-61a9-47c4-918a-6f097cf10aa1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9cvv\" (UniqueName: \"kubernetes.io/projected/5b2ad507-8ef0-40e5-a10c-d5ed62a8181e-kube-api-access-l9cvv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz\" (UID: \"5b2ad507-8ef0-40e5-a10c-d5ed62a8181e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037451 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrsg\" (UniqueName: \"kubernetes.io/projected/03d41d00-eefc-45c4-aaea-f09a5e34362b-kube-api-access-dwrsg\") pod \"keystone-operator-controller-manager-b8b6d4659-zbp84\" (UID: \"03d41d00-eefc-45c4-aaea-f09a5e34362b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.037497 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvqh\" (UniqueName: \"kubernetes.io/projected/01b83dfe-58bb-40fa-a0e8-b942b4c79b72-kube-api-access-knvqh\") pod \"neutron-operator-controller-manager-78d58447c5-tz995\" (UID: \"01b83dfe-58bb-40fa-a0e8-b942b4c79b72\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.138484 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8dw\" (UniqueName: \"kubernetes.io/projected/284309e9-61a9-47c4-918a-6f097cf10aa1-kube-api-access-rh8dw\") pod \"octavia-operator-controller-manager-5f4cd88d46-qr8vf\" (UID: \"284309e9-61a9-47c4-918a-6f097cf10aa1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.138647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b55z\" (UniqueName: \"kubernetes.io/projected/3164f5a5-0f37-4ab6-bc2a-51978eb9f842-kube-api-access-9b55z\") pod \"nova-operator-controller-manager-7bdb645866-xrm44\" (UID: \"3164f5a5-0f37-4ab6-bc2a-51978eb9f842\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.176960 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrsg\" (UniqueName: \"kubernetes.io/projected/03d41d00-eefc-45c4-aaea-f09a5e34362b-kube-api-access-dwrsg\") pod \"keystone-operator-controller-manager-b8b6d4659-zbp84\" (UID: \"03d41d00-eefc-45c4-aaea-f09a5e34362b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.177661 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxp4\" (UniqueName: \"kubernetes.io/projected/0d2709bf-2113-45d7-94a1-816bc230044a-kube-api-access-7nxp4\") pod \"manila-operator-controller-manager-78c6999f6f-v9b85\" (UID: \"0d2709bf-2113-45d7-94a1-816bc230044a\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.178543 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.179032 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.184615 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b55z\" (UniqueName: \"kubernetes.io/projected/3164f5a5-0f37-4ab6-bc2a-51978eb9f842-kube-api-access-9b55z\") pod \"nova-operator-controller-manager-7bdb645866-xrm44\" (UID: \"3164f5a5-0f37-4ab6-bc2a-51978eb9f842\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.187878 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8dw\" (UniqueName: \"kubernetes.io/projected/284309e9-61a9-47c4-918a-6f097cf10aa1-kube-api-access-rh8dw\") pod \"octavia-operator-controller-manager-5f4cd88d46-qr8vf\" (UID: \"284309e9-61a9-47c4-918a-6f097cf10aa1\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.188959 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.210904 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9cvv\" (UniqueName: \"kubernetes.io/projected/5b2ad507-8ef0-40e5-a10c-d5ed62a8181e-kube-api-access-l9cvv\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz\" (UID: \"5b2ad507-8ef0-40e5-a10c-d5ed62a8181e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.237307 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.312366 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvqh\" (UniqueName: \"kubernetes.io/projected/01b83dfe-58bb-40fa-a0e8-b942b4c79b72-kube-api-access-knvqh\") pod \"neutron-operator-controller-manager-78d58447c5-tz995\" (UID: \"01b83dfe-58bb-40fa-a0e8-b942b4c79b72\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.333837 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.352372 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4"
Jan 26 18:50:31 crc kubenswrapper[4737]: E0126 18:50:31.355209 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 18:50:31 crc kubenswrapper[4737]: E0126 18:50:31.364756 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:50:32.364704378 +0000 UTC m=+1205.672899086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.373688 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.395482 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.395526 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.401090 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.428532 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.428669 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.432761 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.432772 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.432866 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.432968 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5bcds"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.433101 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.433823 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.440782 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.442006 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.454611 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5hsnc"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.461861 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ltnjj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.464661 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.464786 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.488791 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8nxc4"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.519543 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.528229 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.531133 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.533301 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7b28l"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.572176 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnvh\" (UniqueName: \"kubernetes.io/projected/11c8ec8e-f710-4b3f-9bf2-be1834ddffb9-kube-api-access-pgnvh\") pod \"placement-operator-controller-manager-79d5ccc684-lfh5n\" (UID: \"11c8ec8e-f710-4b3f-9bf2-be1834ddffb9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.572230 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvtb\" (UniqueName: \"kubernetes.io/projected/8aa44595-2352-4a3e-888f-3409254cde36-kube-api-access-ptvtb\") pod \"swift-operator-controller-manager-547cbdb99f-9lkfc\" (UID: \"8aa44595-2352-4a3e-888f-3409254cde36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.572304 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.572352 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4x8\" (UniqueName: \"kubernetes.io/projected/c9b745b4-487d-4ccb-a398-8d9af643ae50-kube-api-access-dd4x8\") pod \"ovn-operator-controller-manager-6f75f45d54-55xkx\" (UID: \"c9b745b4-487d-4ccb-a398-8d9af643ae50\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.572372 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72257\" (UniqueName: \"kubernetes.io/projected/5175d9d3-4bf9-4f52-be13-e33b02e03592-kube-api-access-72257\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.591243 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.656652 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.658787 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.673723 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvtb\" (UniqueName: \"kubernetes.io/projected/8aa44595-2352-4a3e-888f-3409254cde36-kube-api-access-ptvtb\") pod \"swift-operator-controller-manager-547cbdb99f-9lkfc\" (UID: \"8aa44595-2352-4a3e-888f-3409254cde36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.673806 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hwl\" (UniqueName: \"kubernetes.io/projected/0716cfbf-95d3-44fd-9e28-9b861568b791-kube-api-access-m4hwl\") pod \"telemetry-operator-controller-manager-6cf49855b4-zfzgj\" (UID: \"0716cfbf-95d3-44fd-9e28-9b861568b791\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.673873 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.673945 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4x8\" (UniqueName: \"kubernetes.io/projected/c9b745b4-487d-4ccb-a398-8d9af643ae50-kube-api-access-dd4x8\") pod \"ovn-operator-controller-manager-6f75f45d54-55xkx\" (UID: \"c9b745b4-487d-4ccb-a398-8d9af643ae50\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.674024 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72257\" (UniqueName: \"kubernetes.io/projected/5175d9d3-4bf9-4f52-be13-e33b02e03592-kube-api-access-72257\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.674104 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnvh\" (UniqueName: \"kubernetes.io/projected/11c8ec8e-f710-4b3f-9bf2-be1834ddffb9-kube-api-access-pgnvh\") pod \"placement-operator-controller-manager-79d5ccc684-lfh5n\" (UID: \"11c8ec8e-f710-4b3f-9bf2-be1834ddffb9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"
Jan 26 18:50:31 crc kubenswrapper[4737]: E0126 18:50:31.674731 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 18:50:31 crc kubenswrapper[4737]: E0126 18:50:31.674787 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:32.174769371 +0000 UTC m=+1205.482964079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.696248 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.709658 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.710934 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.725206 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.731795 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.733592 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.734143 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.744638 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.747900 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"]
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.747984 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.776042 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hwl\" (UniqueName: \"kubernetes.io/projected/0716cfbf-95d3-44fd-9e28-9b861568b791-kube-api-access-m4hwl\") pod \"telemetry-operator-controller-manager-6cf49855b4-zfzgj\" (UID: \"0716cfbf-95d3-44fd-9e28-9b861568b791\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.776162 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5mj\" (UniqueName: \"kubernetes.io/projected/c68a8293-a298-4384-83f0-4a7e50517d3b-kube-api-access-9z5mj\") pod \"test-operator-controller-manager-69797bbcbd-4n95b\" (UID: \"c68a8293-a298-4384-83f0-4a7e50517d3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.857381 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.857682 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6jnkf"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.857800 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gvd5z"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.857883 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.858050 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-25vjx"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.858310 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tbwd6"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.893109 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4x8\" (UniqueName: \"kubernetes.io/projected/c9b745b4-487d-4ccb-a398-8d9af643ae50-kube-api-access-dd4x8\") pod \"ovn-operator-controller-manager-6f75f45d54-55xkx\" (UID: \"c9b745b4-487d-4ccb-a398-8d9af643ae50\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.897277 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hwl\" (UniqueName: \"kubernetes.io/projected/0716cfbf-95d3-44fd-9e28-9b861568b791-kube-api-access-m4hwl\") pod \"telemetry-operator-controller-manager-6cf49855b4-zfzgj\" (UID: \"0716cfbf-95d3-44fd-9e28-9b861568b791\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898678 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfkc\" (UniqueName: \"kubernetes.io/projected/c7cfbb47-6d43-4030-a3d1-516430aeffb7-kube-api-access-fgfkc\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898734 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898765 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8q7\" (UniqueName: \"kubernetes.io/projected/148ce19e-3a70-4b27-98e1-87807dee6178-kube-api-access-fd8q7\") pod \"watcher-operator-controller-manager-564965969-hx2gj\" (UID: \"148ce19e-3a70-4b27-98e1-87807dee6178\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5mj\" (UniqueName: \"kubernetes.io/projected/c68a8293-a298-4384-83f0-4a7e50517d3b-kube-api-access-9z5mj\") pod \"test-operator-controller-manager-69797bbcbd-4n95b\" (UID: \"c68a8293-a298-4384-83f0-4a7e50517d3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898918 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.898959 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5cd\" (UniqueName: \"kubernetes.io/projected/3c491fdc-889c-4d4a-aedd-60a242e26027-kube-api-access-qr5cd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5xvj4\" (UID: \"3c491fdc-889c-4d4a-aedd-60a242e26027\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.919159 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvtb\" (UniqueName: \"kubernetes.io/projected/8aa44595-2352-4a3e-888f-3409254cde36-kube-api-access-ptvtb\") pod \"swift-operator-controller-manager-547cbdb99f-9lkfc\" (UID: \"8aa44595-2352-4a3e-888f-3409254cde36\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.924029 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnvh\" (UniqueName: \"kubernetes.io/projected/11c8ec8e-f710-4b3f-9bf2-be1834ddffb9-kube-api-access-pgnvh\") pod \"placement-operator-controller-manager-79d5ccc684-lfh5n\" (UID: \"11c8ec8e-f710-4b3f-9bf2-be1834ddffb9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.925455 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5mj\" (UniqueName: \"kubernetes.io/projected/c68a8293-a298-4384-83f0-4a7e50517d3b-kube-api-access-9z5mj\") pod \"test-operator-controller-manager-69797bbcbd-4n95b\" (UID: \"c68a8293-a298-4384-83f0-4a7e50517d3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.928024 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72257\" (UniqueName: \"kubernetes.io/projected/5175d9d3-4bf9-4f52-be13-e33b02e03592-kube-api-access-72257\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:31 crc kubenswrapper[4737]: I0126 18:50:31.957436 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs"]
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.010815 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgfkc\" (UniqueName: \"kubernetes.io/projected/c7cfbb47-6d43-4030-a3d1-516430aeffb7-kube-api-access-fgfkc\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.011204 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.011239 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8q7\" (UniqueName: \"kubernetes.io/projected/148ce19e-3a70-4b27-98e1-87807dee6178-kube-api-access-fd8q7\") pod \"watcher-operator-controller-manager-564965969-hx2gj\" (UID: \"148ce19e-3a70-4b27-98e1-87807dee6178\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.011349 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.011378 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr5cd\" (UniqueName: \"kubernetes.io/projected/3c491fdc-889c-4d4a-aedd-60a242e26027-kube-api-access-qr5cd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5xvj4\" (UID: \"3c491fdc-889c-4d4a-aedd-60a242e26027\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.011801 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.011840 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:32.511826543 +0000 UTC m=+1205.820021251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.011981 4737 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.012005 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:32.511998567 +0000 UTC m=+1205.820193275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.075375 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8q7\" (UniqueName: \"kubernetes.io/projected/148ce19e-3a70-4b27-98e1-87807dee6178-kube-api-access-fd8q7\") pod \"watcher-operator-controller-manager-564965969-hx2gj\" (UID: \"148ce19e-3a70-4b27-98e1-87807dee6178\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.083514 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr5cd\" (UniqueName: \"kubernetes.io/projected/3c491fdc-889c-4d4a-aedd-60a242e26027-kube-api-access-qr5cd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5xvj4\" (UID: \"3c491fdc-889c-4d4a-aedd-60a242e26027\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.135154 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgfkc\" (UniqueName: \"kubernetes.io/projected/c7cfbb47-6d43-4030-a3d1-516430aeffb7-kube-api-access-fgfkc\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.157417 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.294533 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.295644 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.296548 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.296760 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.296827 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:33.296803539 +0000 UTC m=+1206.604998247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.322632 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.355410 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.387294 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.397375 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4"
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.398094 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.398157 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:50:34.398136416 +0000 UTC m=+1207.706331124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found
Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.435782 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.600398 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.600605 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.600866 4737 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.600941 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:33.600923942 +0000 UTC m=+1206.909118650 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.601012 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:50:32 crc kubenswrapper[4737]: E0126 18:50:32.601037 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:33.601029835 +0000 UTC m=+1206.909224543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found Jan 26 18:50:32 crc kubenswrapper[4737]: I0126 18:50:32.973190 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" event={"ID":"6cc46694-b15a-4417-a0a9-f4c13184f2ca","Type":"ContainerStarted","Data":"c8d84e006bff4351464cb04e2859b7d19ee7797d469b489d8297cc05af041c5a"} Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.330803 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 
18:50:33.331241 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 18:50:33.331290 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:35.331274831 +0000 UTC m=+1208.639469539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.636233 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.636368 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 18:50:33.636521 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 
18:50:33.636585 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:35.636570652 +0000 UTC m=+1208.944765360 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 18:50:33.636588 4737 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: E0126 18:50:33.636636 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:35.636622573 +0000 UTC m=+1208.944817271 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.839528 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9"] Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.852477 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh"] Jan 26 18:50:33 crc kubenswrapper[4737]: W0126 18:50:33.864370 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3508c1f8_c9d9_41bf_b71e_eebb13eb5e86.slice/crio-afcb23976a37aae8751728b73d2683792bea434510fac3700d21990b097132f3 WatchSource:0}: Error finding container afcb23976a37aae8751728b73d2683792bea434510fac3700d21990b097132f3: Status 404 returned error can't find the container with id afcb23976a37aae8751728b73d2683792bea434510fac3700d21990b097132f3 Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.882888 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk"] Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.894553 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz"] Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.907115 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d"] Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.982846 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" event={"ID":"3508c1f8-c9d9-41bf-b71e-eebb13eb5e86","Type":"ContainerStarted","Data":"afcb23976a37aae8751728b73d2683792bea434510fac3700d21990b097132f3"} Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.985938 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" event={"ID":"d80defd5-46d2-4e20-b093-dff95dca651b","Type":"ContainerStarted","Data":"39ad69607ece7cd0583cb7a6b4f97a25b19ccb4fe36cb2efc02bc5d4efa85df1"} Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.988271 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" event={"ID":"97c0989d-f677-4460-b62b-4733c7db29d4","Type":"ContainerStarted","Data":"9df7a1731a99ba6ff16c85eece4b80d21dd7d86f475a3d29b39d737e277a8ae1"} Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.989887 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" event={"ID":"b3a010fd-4f62-40c6-a377-be5c6f2e6ba7","Type":"ContainerStarted","Data":"664013ee6540ae18180edf9a0c11449227082c1e53551a0aa0ccf80dd4195dc4"} Jan 26 18:50:33 crc kubenswrapper[4737]: I0126 18:50:33.991346 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" event={"ID":"5b2ad507-8ef0-40e5-a10c-d5ed62a8181e","Type":"ContainerStarted","Data":"07e2cc54ac2adf431e71821eaa0ebc15b8e6ca4238da6967cd48edd29864a8e2"} Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.188642 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.202306 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84"] Jan 26 18:50:34 crc kubenswrapper[4737]: W0126 18:50:34.206045 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d41d00_eefc_45c4_aaea_f09a5e34362b.slice/crio-c799ad4433ce3f69488f031e8684cb8ffdb048fe2446cd0d043b8898ba169630 WatchSource:0}: Error finding container c799ad4433ce3f69488f031e8684cb8ffdb048fe2446cd0d043b8898ba169630: Status 404 returned error can't find the container with id c799ad4433ce3f69488f031e8684cb8ffdb048fe2446cd0d043b8898ba169630 Jan 26 18:50:34 crc kubenswrapper[4737]: W0126 18:50:34.208751 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2709bf_2113_45d7_94a1_816bc230044a.slice/crio-610a219a9533636ab9dfad5c08f1586477dc4493f3ee43721e3cd8425ea637a5 WatchSource:0}: Error finding container 610a219a9533636ab9dfad5c08f1586477dc4493f3ee43721e3cd8425ea637a5: Status 404 returned error can't find the container with id 610a219a9533636ab9dfad5c08f1586477dc4493f3ee43721e3cd8425ea637a5 Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.215028 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.453672 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:34 crc kubenswrapper[4737]: E0126 18:50:34.453789 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 
18:50:34 crc kubenswrapper[4737]: E0126 18:50:34.453862 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:50:38.45384425 +0000 UTC m=+1211.762038958 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.843631 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-hx2gj"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.905147 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.918377 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.925286 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.931658 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.938238 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.945290 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.951815 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.958378 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.965118 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.970927 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4"] Jan 26 18:50:34 crc kubenswrapper[4737]: I0126 18:50:34.999665 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" event={"ID":"62ddf97f-7d75-4667-9480-17cb809b98f5","Type":"ContainerStarted","Data":"c3cebd06d68b601078efb4e1745828419dce66376af9a9109e4729297a3218aa"} Jan 26 18:50:35 crc kubenswrapper[4737]: I0126 18:50:35.001673 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" event={"ID":"03d41d00-eefc-45c4-aaea-f09a5e34362b","Type":"ContainerStarted","Data":"c799ad4433ce3f69488f031e8684cb8ffdb048fe2446cd0d043b8898ba169630"} Jan 26 18:50:35 crc kubenswrapper[4737]: I0126 18:50:35.005661 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" event={"ID":"0d2709bf-2113-45d7-94a1-816bc230044a","Type":"ContainerStarted","Data":"610a219a9533636ab9dfad5c08f1586477dc4493f3ee43721e3cd8425ea637a5"} Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.163287 4737 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0716cfbf_95d3_44fd_9e28_9b861568b791.slice/crio-0e0b387258a2be7c025a2d255d90573259443d43dbe7d8e4cfabad3df639890a WatchSource:0}: Error finding container 0e0b387258a2be7c025a2d255d90573259443d43dbe7d8e4cfabad3df639890a: Status 404 returned error can't find the container with id 0e0b387258a2be7c025a2d255d90573259443d43dbe7d8e4cfabad3df639890a Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.164738 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3164f5a5_0f37_4ab6_bc2a_51978eb9f842.slice/crio-a24c524bd9915a976db75086ccecdaba1df95a439c5deaa27442ecd2bbe36a62 WatchSource:0}: Error finding container a24c524bd9915a976db75086ccecdaba1df95a439c5deaa27442ecd2bbe36a62: Status 404 returned error can't find the container with id a24c524bd9915a976db75086ccecdaba1df95a439c5deaa27442ecd2bbe36a62 Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.178462 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9b745b4_487d_4ccb_a398_8d9af643ae50.slice/crio-af7aecb8679dc1571cf58ae2f1a12a03bd790b6f78ed29e5776184347739e133 WatchSource:0}: Error finding container af7aecb8679dc1571cf58ae2f1a12a03bd790b6f78ed29e5776184347739e133: Status 404 returned error can't find the container with id af7aecb8679dc1571cf58ae2f1a12a03bd790b6f78ed29e5776184347739e133 Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.182833 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc68a8293_a298_4384_83f0_4a7e50517d3b.slice/crio-d2d582ac3426f0ee7071ab3cfb2641514e6b63c9f97aad02b262436681422984 WatchSource:0}: Error finding container d2d582ac3426f0ee7071ab3cfb2641514e6b63c9f97aad02b262436681422984: Status 404 returned error can't 
find the container with id d2d582ac3426f0ee7071ab3cfb2641514e6b63c9f97aad02b262436681422984 Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.185424 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284309e9_61a9_47c4_918a_6f097cf10aa1.slice/crio-9a2ed2b59e8ffba69d83bb36215a101d8b98da0a34a638aa1c9f73a6e9fe17a7 WatchSource:0}: Error finding container 9a2ed2b59e8ffba69d83bb36215a101d8b98da0a34a638aa1c9f73a6e9fe17a7: Status 404 returned error can't find the container with id 9a2ed2b59e8ffba69d83bb36215a101d8b98da0a34a638aa1c9f73a6e9fe17a7 Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.205394 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod288df3c7_1220_419c_bde6_67ee3922b8ad.slice/crio-7e0117286f34b6dae5eac13d6159cb82b460bf5a7d8e64f098e13f4b7624b744 WatchSource:0}: Error finding container 7e0117286f34b6dae5eac13d6159cb82b460bf5a7d8e64f098e13f4b7624b744: Status 404 returned error can't find the container with id 7e0117286f34b6dae5eac13d6159cb82b460bf5a7d8e64f098e13f4b7624b744 Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.210522 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnt7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-p42h8_openstack-operators(288df3c7-1220-419c-bde6-67ee3922b8ad): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.211789 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" podUID="288df3c7-1220-419c-bde6-67ee3922b8ad" Jan 26 18:50:35 crc kubenswrapper[4737]: W0126 18:50:35.216335 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b83dfe_58bb_40fa_a0e8_b942b4c79b72.slice/crio-2e80326cd2027ac0a5bd66db441b193a0442412358636e04bb2bb500541e3a1a WatchSource:0}: Error finding container 2e80326cd2027ac0a5bd66db441b193a0442412358636e04bb2bb500541e3a1a: Status 404 returned error can't find the container with id 2e80326cd2027ac0a5bd66db441b193a0442412358636e04bb2bb500541e3a1a Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.235360 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptvtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9lkfc_openstack-operators(8aa44595-2352-4a3e-888f-3409254cde36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.235690 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knvqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-tz995_openstack-operators(01b83dfe-58bb-40fa-a0e8-b942b4c79b72): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.237216 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podUID="8aa44595-2352-4a3e-888f-3409254cde36" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.237252 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podUID="01b83dfe-58bb-40fa-a0e8-b942b4c79b72" Jan 26 18:50:35 crc kubenswrapper[4737]: I0126 18:50:35.384696 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") 
" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.385028 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.385277 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:39.385261065 +0000 UTC m=+1212.693455773 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:35 crc kubenswrapper[4737]: I0126 18:50:35.690389 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:35 crc kubenswrapper[4737]: I0126 18:50:35.690802 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.690551 4737 secret.go:188] Couldn't 
get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.690959 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:39.690942845 +0000 UTC m=+1212.999137553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.690909 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:50:35 crc kubenswrapper[4737]: E0126 18:50:35.691257 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:39.691249242 +0000 UTC m=+1212.999443950 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.033541 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" event={"ID":"c9b745b4-487d-4ccb-a398-8d9af643ae50","Type":"ContainerStarted","Data":"af7aecb8679dc1571cf58ae2f1a12a03bd790b6f78ed29e5776184347739e133"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.038380 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" event={"ID":"01b83dfe-58bb-40fa-a0e8-b942b4c79b72","Type":"ContainerStarted","Data":"2e80326cd2027ac0a5bd66db441b193a0442412358636e04bb2bb500541e3a1a"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.040901 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" event={"ID":"284309e9-61a9-47c4-918a-6f097cf10aa1","Type":"ContainerStarted","Data":"9a2ed2b59e8ffba69d83bb36215a101d8b98da0a34a638aa1c9f73a6e9fe17a7"} Jan 26 18:50:36 crc kubenswrapper[4737]: E0126 18:50:36.042102 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podUID="01b83dfe-58bb-40fa-a0e8-b942b4c79b72" Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.046134 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" event={"ID":"148ce19e-3a70-4b27-98e1-87807dee6178","Type":"ContainerStarted","Data":"b5dd20f7ab1dbec25a259d971c8c22acaeb0b96a6321a902047f84d29f85c1e5"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.047380 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" event={"ID":"8aa44595-2352-4a3e-888f-3409254cde36","Type":"ContainerStarted","Data":"fde66ca6a53f2f426cca7d706bfda82091092bd8a5f64755bc88233613408fce"} Jan 26 18:50:36 crc kubenswrapper[4737]: E0126 18:50:36.050105 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podUID="8aa44595-2352-4a3e-888f-3409254cde36" Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.055584 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" event={"ID":"11c8ec8e-f710-4b3f-9bf2-be1834ddffb9","Type":"ContainerStarted","Data":"446328d4b28ba25238b62a29a0657826d8818d66dd7c846fd9ade600ba34eace"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.057734 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" event={"ID":"3c491fdc-889c-4d4a-aedd-60a242e26027","Type":"ContainerStarted","Data":"2a8fde112a662aae546612a5a70ef380767f8fc2ac060bb9c1afbe792a0a00af"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.065791 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj" 
event={"ID":"0716cfbf-95d3-44fd-9e28-9b861568b791","Type":"ContainerStarted","Data":"0e0b387258a2be7c025a2d255d90573259443d43dbe7d8e4cfabad3df639890a"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.072465 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" event={"ID":"6cc46694-b15a-4417-a0a9-f4c13184f2ca","Type":"ContainerStarted","Data":"07fa1ddfb90db659865f5f5feaa7516a59be7503a224661abeba34571e46ab48"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.073592 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.117201 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" podStartSLOduration=3.154368639 podStartE2EDuration="6.11718255s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:32.295175761 +0000 UTC m=+1205.603370469" lastFinishedPulling="2026-01-26 18:50:35.257989672 +0000 UTC m=+1208.566184380" observedRunningTime="2026-01-26 18:50:36.116555735 +0000 UTC m=+1209.424750443" watchObservedRunningTime="2026-01-26 18:50:36.11718255 +0000 UTC m=+1209.425377258" Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.134650 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b" event={"ID":"c68a8293-a298-4384-83f0-4a7e50517d3b","Type":"ContainerStarted","Data":"d2d582ac3426f0ee7071ab3cfb2641514e6b63c9f97aad02b262436681422984"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.137668 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" 
event={"ID":"3164f5a5-0f37-4ab6-bc2a-51978eb9f842","Type":"ContainerStarted","Data":"a24c524bd9915a976db75086ccecdaba1df95a439c5deaa27442ecd2bbe36a62"} Jan 26 18:50:36 crc kubenswrapper[4737]: I0126 18:50:36.149135 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" event={"ID":"288df3c7-1220-419c-bde6-67ee3922b8ad","Type":"ContainerStarted","Data":"7e0117286f34b6dae5eac13d6159cb82b460bf5a7d8e64f098e13f4b7624b744"} Jan 26 18:50:36 crc kubenswrapper[4737]: E0126 18:50:36.155062 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" podUID="288df3c7-1220-419c-bde6-67ee3922b8ad" Jan 26 18:50:37 crc kubenswrapper[4737]: E0126 18:50:37.178032 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podUID="01b83dfe-58bb-40fa-a0e8-b942b4c79b72" Jan 26 18:50:37 crc kubenswrapper[4737]: E0126 18:50:37.178420 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" podUID="288df3c7-1220-419c-bde6-67ee3922b8ad" Jan 26 18:50:37 crc kubenswrapper[4737]: E0126 18:50:37.178529 4737 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podUID="8aa44595-2352-4a3e-888f-3409254cde36" Jan 26 18:50:38 crc kubenswrapper[4737]: I0126 18:50:38.509001 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:38 crc kubenswrapper[4737]: E0126 18:50:38.510273 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:38 crc kubenswrapper[4737]: E0126 18:50:38.510340 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:50:46.510321506 +0000 UTC m=+1219.818516214 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: I0126 18:50:39.403468 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.403646 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.403742 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:47.403719022 +0000 UTC m=+1220.711913770 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: I0126 18:50:39.709036 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:39 crc kubenswrapper[4737]: I0126 18:50:39.709179 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.709241 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.709315 4737 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.709361 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:47.70934701 +0000 UTC m=+1221.017541718 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found Jan 26 18:50:39 crc kubenswrapper[4737]: E0126 18:50:39.709377 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:50:47.709370201 +0000 UTC m=+1221.017564909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found Jan 26 18:50:40 crc kubenswrapper[4737]: I0126 18:50:40.709802 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-hbqjs" Jan 26 18:50:46 crc kubenswrapper[4737]: I0126 18:50:46.602061 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:50:46 crc kubenswrapper[4737]: E0126 18:50:46.602840 4737 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:46 crc kubenswrapper[4737]: E0126 18:50:46.602888 4737 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert podName:6904aa8b-12dd-4139-9a9f-f60be010cf3b nodeName:}" failed. No retries permitted until 2026-01-26 18:51:02.602874859 +0000 UTC m=+1235.911069567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert") pod "infra-operator-controller-manager-694cf4f878-9lqk4" (UID: "6904aa8b-12dd-4139-9a9f-f60be010cf3b") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: I0126 18:50:47.410117 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.410311 4737 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.410393 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert podName:5175d9d3-4bf9-4f52-be13-e33b02e03592 nodeName:}" failed. No retries permitted until 2026-01-26 18:51:03.4103744 +0000 UTC m=+1236.718569098 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" (UID: "5175d9d3-4bf9-4f52-be13-e33b02e03592") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: I0126 18:50:47.717642 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:47 crc kubenswrapper[4737]: I0126 18:50:47.717755 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.717967 4737 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.718103 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:51:03.718026456 +0000 UTC m=+1237.026221164 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "metrics-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.719033 4737 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:50:47 crc kubenswrapper[4737]: E0126 18:50:47.719213 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs podName:c7cfbb47-6d43-4030-a3d1-516430aeffb7 nodeName:}" failed. No retries permitted until 2026-01-26 18:51:03.719199964 +0000 UTC m=+1237.027394682 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs") pod "openstack-operator-controller-manager-6ffbd5d47c-xwdkt" (UID: "c7cfbb47-6d43-4030-a3d1-516430aeffb7") : secret "webhook-server-cert" not found Jan 26 18:50:51 crc kubenswrapper[4737]: E0126 18:50:51.756753 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 26 18:50:51 crc kubenswrapper[4737]: E0126 18:50:51.757552 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vpwkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-kq82d_openstack-operators(d80defd5-46d2-4e20-b093-dff95dca651b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:51 crc kubenswrapper[4737]: E0126 18:50:51.758997 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" podUID="d80defd5-46d2-4e20-b093-dff95dca651b" Jan 26 18:50:52 crc kubenswrapper[4737]: E0126 18:50:52.351486 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" podUID="d80defd5-46d2-4e20-b093-dff95dca651b" Jan 26 18:50:52 crc kubenswrapper[4737]: E0126 18:50:52.772577 4737 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 26 18:50:52 crc kubenswrapper[4737]: E0126 18:50:52.772800 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-jpmmh_openstack-operators(b3a010fd-4f62-40c6-a377-be5c6f2e6ba7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:52 crc kubenswrapper[4737]: E0126 18:50:52.774003 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" podUID="b3a010fd-4f62-40c6-a377-be5c6f2e6ba7" Jan 26 18:50:53 crc kubenswrapper[4737]: E0126 18:50:53.360235 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" podUID="b3a010fd-4f62-40c6-a377-be5c6f2e6ba7" Jan 26 18:50:54 crc kubenswrapper[4737]: E0126 18:50:54.580725 4737 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 26 18:50:54 crc kubenswrapper[4737]: E0126 18:50:54.581144 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9cvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz_openstack-operators(5b2ad507-8ef0-40e5-a10c-d5ed62a8181e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:54 crc kubenswrapper[4737]: E0126 18:50:54.582329 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" podUID="5b2ad507-8ef0-40e5-a10c-d5ed62a8181e" Jan 26 18:50:55 crc kubenswrapper[4737]: E0126 18:50:55.496937 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" podUID="5b2ad507-8ef0-40e5-a10c-d5ed62a8181e" Jan 26 18:50:55 crc kubenswrapper[4737]: E0126 18:50:55.502971 4737 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 18:50:55 crc kubenswrapper[4737]: E0126 18:50:55.503170 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rh8dw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-qr8vf_openstack-operators(284309e9-61a9-47c4-918a-6f097cf10aa1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:55 crc kubenswrapper[4737]: E0126 18:50:55.504408 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" podUID="284309e9-61a9-47c4-918a-6f097cf10aa1" Jan 26 18:50:56 crc kubenswrapper[4737]: E0126 18:50:56.503432 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" podUID="284309e9-61a9-47c4-918a-6f097cf10aa1" Jan 26 18:50:56 crc kubenswrapper[4737]: E0126 18:50:56.793818 4737 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 18:50:56 crc kubenswrapper[4737]: E0126 18:50:56.794499 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pgnvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-lfh5n_openstack-operators(11c8ec8e-f710-4b3f-9bf2-be1834ddffb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:56 crc kubenswrapper[4737]: E0126 18:50:56.796046 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" podUID="11c8ec8e-f710-4b3f-9bf2-be1834ddffb9" Jan 26 18:50:57 crc kubenswrapper[4737]: E0126 18:50:57.435374 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 18:50:57 crc kubenswrapper[4737]: E0126 18:50:57.435704 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dd4x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-55xkx_openstack-operators(c9b745b4-487d-4ccb-a398-8d9af643ae50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:57 crc kubenswrapper[4737]: E0126 18:50:57.437516 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" podUID="c9b745b4-487d-4ccb-a398-8d9af643ae50" Jan 26 18:50:57 crc kubenswrapper[4737]: E0126 18:50:57.511211 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" podUID="11c8ec8e-f710-4b3f-9bf2-be1834ddffb9" Jan 26 18:50:57 crc kubenswrapper[4737]: E0126 18:50:57.511772 4737 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" podUID="c9b745b4-487d-4ccb-a398-8d9af643ae50" Jan 26 18:50:58 crc kubenswrapper[4737]: E0126 18:50:58.089175 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 18:50:58 crc kubenswrapper[4737]: E0126 18:50:58.089371 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2tbn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-j9nc9_openstack-operators(3508c1f8-c9d9-41bf-b71e-eebb13eb5e86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:58 crc kubenswrapper[4737]: E0126 18:50:58.091178 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" podUID="3508c1f8-c9d9-41bf-b71e-eebb13eb5e86" Jan 26 18:50:58 crc kubenswrapper[4737]: E0126 18:50:58.518169 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" podUID="3508c1f8-c9d9-41bf-b71e-eebb13eb5e86" Jan 26 18:50:59 crc kubenswrapper[4737]: E0126 18:50:59.577593 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 26 18:50:59 crc kubenswrapper[4737]: E0126 18:50:59.578032 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fd8q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-hx2gj_openstack-operators(148ce19e-3a70-4b27-98e1-87807dee6178): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:50:59 crc kubenswrapper[4737]: E0126 18:50:59.579796 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" podUID="148ce19e-3a70-4b27-98e1-87807dee6178" Jan 26 18:51:00 crc kubenswrapper[4737]: E0126 18:51:00.530470 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" podUID="148ce19e-3a70-4b27-98e1-87807dee6178" Jan 26 18:51:00 crc kubenswrapper[4737]: I0126 18:51:00.949327 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:51:00 crc kubenswrapper[4737]: I0126 18:51:00.950363 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:51:02 crc kubenswrapper[4737]: I0126 18:51:02.627797 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:51:02 crc kubenswrapper[4737]: I0126 18:51:02.635022 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6904aa8b-12dd-4139-9a9f-f60be010cf3b-cert\") pod \"infra-operator-controller-manager-694cf4f878-9lqk4\" (UID: \"6904aa8b-12dd-4139-9a9f-f60be010cf3b\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:51:02 crc kubenswrapper[4737]: I0126 18:51:02.908664 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.444389 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.450217 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5175d9d3-4bf9-4f52-be13-e33b02e03592-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv\" (UID: \"5175d9d3-4bf9-4f52-be13-e33b02e03592\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.583647 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.749447 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.749582 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.753512 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-metrics-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.755806 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c7cfbb47-6d43-4030-a3d1-516430aeffb7-webhook-certs\") pod \"openstack-operator-controller-manager-6ffbd5d47c-xwdkt\" (UID: \"c7cfbb47-6d43-4030-a3d1-516430aeffb7\") " pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:03 crc kubenswrapper[4737]: I0126 18:51:03.909767 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:04 crc kubenswrapper[4737]: I0126 18:51:04.177470 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ts4kl" podUID="f86f264d-5704-4995-9e15-13b28bd18dc4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:51:08 crc kubenswrapper[4737]: E0126 18:51:08.702033 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 18:51:08 crc kubenswrapper[4737]: E0126 18:51:08.702886 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gl86t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-bl8hk_openstack-operators(97c0989d-f677-4460-b62b-4733c7db29d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:08 crc kubenswrapper[4737]: E0126 18:51:08.704180 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" podUID="97c0989d-f677-4460-b62b-4733c7db29d4" Jan 26 18:51:09 crc kubenswrapper[4737]: E0126 18:51:09.602368 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" podUID="97c0989d-f677-4460-b62b-4733c7db29d4" Jan 26 18:51:10 crc kubenswrapper[4737]: E0126 18:51:10.466404 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 18:51:10 crc kubenswrapper[4737]: E0126 18:51:10.466848 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knvqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-tz995_openstack-operators(01b83dfe-58bb-40fa-a0e8-b942b4c79b72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:10 crc kubenswrapper[4737]: E0126 18:51:10.468020 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podUID="01b83dfe-58bb-40fa-a0e8-b942b4c79b72" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.146146 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.146337 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptvtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9lkfc_openstack-operators(8aa44595-2352-4a3e-888f-3409254cde36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.147703 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podUID="8aa44595-2352-4a3e-888f-3409254cde36" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.623011 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.623210 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qr5cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5xvj4_openstack-operators(3c491fdc-889c-4d4a-aedd-60a242e26027): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:11 crc kubenswrapper[4737]: E0126 18:51:11.624415 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" podUID="3c491fdc-889c-4d4a-aedd-60a242e26027" Jan 26 18:51:12 crc kubenswrapper[4737]: E0126 18:51:12.627204 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" podUID="3c491fdc-889c-4d4a-aedd-60a242e26027" Jan 26 18:51:15 crc 
kubenswrapper[4737]: E0126 18:51:15.820569 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 18:51:15 crc kubenswrapper[4737]: E0126 18:51:15.821391 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9b55z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-xrm44_openstack-operators(3164f5a5-0f37-4ab6-bc2a-51978eb9f842): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:15 crc kubenswrapper[4737]: E0126 18:51:15.822580 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" podUID="3164f5a5-0f37-4ab6-bc2a-51978eb9f842" Jan 26 18:51:16 crc kubenswrapper[4737]: E0126 18:51:16.278742 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 18:51:16 crc kubenswrapper[4737]: E0126 18:51:16.279363 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwrsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-zbp84_openstack-operators(03d41d00-eefc-45c4-aaea-f09a5e34362b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:51:16 crc kubenswrapper[4737]: E0126 18:51:16.280650 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" podUID="03d41d00-eefc-45c4-aaea-f09a5e34362b" Jan 26 18:51:16 crc kubenswrapper[4737]: E0126 18:51:16.780107 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" podUID="3164f5a5-0f37-4ab6-bc2a-51978eb9f842" Jan 26 18:51:16 crc kubenswrapper[4737]: E0126 18:51:16.781451 4737 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" podUID="03d41d00-eefc-45c4-aaea-f09a5e34362b" Jan 26 18:51:16 crc kubenswrapper[4737]: I0126 18:51:16.955930 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv"] Jan 26 18:51:17 crc kubenswrapper[4737]: W0126 18:51:17.053706 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5175d9d3_4bf9_4f52_be13_e33b02e03592.slice/crio-03cf8b0603047aac8bfccd05d05abe742dfdbc8a8d48593bfc28598db6c5373e WatchSource:0}: Error finding container 03cf8b0603047aac8bfccd05d05abe742dfdbc8a8d48593bfc28598db6c5373e: Status 404 returned error can't find the container with id 03cf8b0603047aac8bfccd05d05abe742dfdbc8a8d48593bfc28598db6c5373e Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.188535 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt"] Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.202790 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4"] Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.676034 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" event={"ID":"c9b745b4-487d-4ccb-a398-8d9af643ae50","Type":"ContainerStarted","Data":"9b9adfe142a61f836adc6a4634168e299846cccbb7ee408a8c974de2967b7702"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.676262 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.683938 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" event={"ID":"3508c1f8-c9d9-41bf-b71e-eebb13eb5e86","Type":"ContainerStarted","Data":"fb7059aabd3c5dc4ff44cab3c3848238157a9ea844a2ec9fe6ac0fce8190e0c1"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.684201 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.692864 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" event={"ID":"5b2ad507-8ef0-40e5-a10c-d5ed62a8181e","Type":"ContainerStarted","Data":"e18c2d4120e856481254f489cf94803cc70cccc8599910725e574109b1d5d569"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.693279 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.698741 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" event={"ID":"c7cfbb47-6d43-4030-a3d1-516430aeffb7","Type":"ContainerStarted","Data":"9c5c140c58701574b5f2db7fa56da557ce5fd5f84b0b863354c3ecf8921ed9f7"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.704352 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b" event={"ID":"c68a8293-a298-4384-83f0-4a7e50517d3b","Type":"ContainerStarted","Data":"8256fd9cc7f0dd262d61b1d4b3cc80909469508a365f56f4fca2f063ccb0e17c"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.708019 4737 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" podStartSLOduration=6.417363524 podStartE2EDuration="47.707994851s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.183918552 +0000 UTC m=+1208.492113260" lastFinishedPulling="2026-01-26 18:51:16.474549879 +0000 UTC m=+1249.782744587" observedRunningTime="2026-01-26 18:51:17.702525755 +0000 UTC m=+1251.010720463" watchObservedRunningTime="2026-01-26 18:51:17.707994851 +0000 UTC m=+1251.016189559" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.708768 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" event={"ID":"62ddf97f-7d75-4667-9480-17cb809b98f5","Type":"ContainerStarted","Data":"d52cbd33ac55700170d35f4cb214066bb196f075936429be55b47849eed8ab18"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.709587 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.716044 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" event={"ID":"148ce19e-3a70-4b27-98e1-87807dee6178","Type":"ContainerStarted","Data":"e2a741783db5b8a9bcac3233129331ce4700e121f40e84c56dbe38b2f51bdc34"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.716318 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.724221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" 
event={"ID":"6904aa8b-12dd-4139-9a9f-f60be010cf3b","Type":"ContainerStarted","Data":"e0430d58193ba1643b7b5ebf3d9702a5512a4cd5d8e4fc3201a974f9705ca046"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.725901 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" event={"ID":"11c8ec8e-f710-4b3f-9bf2-be1834ddffb9","Type":"ContainerStarted","Data":"3fc9b78f8024c73b9af1c9443f7c2ffc5597a367d9e3a4af134869d05dadc4e8"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.727324 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" event={"ID":"5175d9d3-4bf9-4f52-be13-e33b02e03592","Type":"ContainerStarted","Data":"03cf8b0603047aac8bfccd05d05abe742dfdbc8a8d48593bfc28598db6c5373e"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.749556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" event={"ID":"0d2709bf-2113-45d7-94a1-816bc230044a","Type":"ContainerStarted","Data":"1a8dd68f13018eae856588fb14ac0d655b3f27275a4dab7078ae7d3b46aa86dd"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.750502 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.756665 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj" event={"ID":"0716cfbf-95d3-44fd-9e28-9b861568b791","Type":"ContainerStarted","Data":"508f4464be78e9cb7def08c510c65a972cf91274570c70372158c705229a3338"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.757520 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj" Jan 26 
18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.761719 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" event={"ID":"b3a010fd-4f62-40c6-a377-be5c6f2e6ba7","Type":"ContainerStarted","Data":"5c9308e294c7eadb57b578c27bd4eae8c0a24219dba009bf0073e28550b39a9a"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.762511 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.769376 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" event={"ID":"284309e9-61a9-47c4-918a-6f097cf10aa1","Type":"ContainerStarted","Data":"650c7684e58f053d40930ce2b7b118d28826e7494996336c9370230f72a161b9"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.770195 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.773157 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" event={"ID":"288df3c7-1220-419c-bde6-67ee3922b8ad","Type":"ContainerStarted","Data":"4ee7abff79366c3cc52115b20c91c503e72422d18a14017c7ddd30ed5db7ca54"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.773618 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.791991 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" 
event={"ID":"d80defd5-46d2-4e20-b093-dff95dca651b","Type":"ContainerStarted","Data":"a8a419a30bc6d1b2ca71393ac458d91aca3d4cabec4f5adf562977b0c59a2496"} Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.792826 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.833890 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" podStartSLOduration=5.227118187 podStartE2EDuration="47.833865242s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:33.874493799 +0000 UTC m=+1207.182688507" lastFinishedPulling="2026-01-26 18:51:16.481240854 +0000 UTC m=+1249.789435562" observedRunningTime="2026-01-26 18:51:17.741552934 +0000 UTC m=+1251.049747652" watchObservedRunningTime="2026-01-26 18:51:17.833865242 +0000 UTC m=+1251.142059950" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.834039 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" podStartSLOduration=5.330365329 podStartE2EDuration="47.834032686s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:33.91437226 +0000 UTC m=+1207.222566968" lastFinishedPulling="2026-01-26 18:51:16.418039607 +0000 UTC m=+1249.726234325" observedRunningTime="2026-01-26 18:51:17.833464153 +0000 UTC m=+1251.141658851" watchObservedRunningTime="2026-01-26 18:51:17.834032686 +0000 UTC m=+1251.142227394" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.888322 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" podStartSLOduration=10.497647778 podStartE2EDuration="47.888302777s" podCreationTimestamp="2026-01-26 
18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:34.212880882 +0000 UTC m=+1207.521075590" lastFinishedPulling="2026-01-26 18:51:11.603535881 +0000 UTC m=+1244.911730589" observedRunningTime="2026-01-26 18:51:17.882646627 +0000 UTC m=+1251.190841335" watchObservedRunningTime="2026-01-26 18:51:17.888302777 +0000 UTC m=+1251.196497485" Jan 26 18:51:17 crc kubenswrapper[4737]: I0126 18:51:17.954592 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" podStartSLOduration=5.456509307 podStartE2EDuration="47.954573104s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:33.918721081 +0000 UTC m=+1207.226915789" lastFinishedPulling="2026-01-26 18:51:16.416784878 +0000 UTC m=+1249.724979586" observedRunningTime="2026-01-26 18:51:17.954107934 +0000 UTC m=+1251.262302642" watchObservedRunningTime="2026-01-26 18:51:17.954573104 +0000 UTC m=+1251.262767812" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.013534 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj" podStartSLOduration=7.40778025 podStartE2EDuration="48.013517083s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.173857388 +0000 UTC m=+1208.482052096" lastFinishedPulling="2026-01-26 18:51:15.779594221 +0000 UTC m=+1249.087788929" observedRunningTime="2026-01-26 18:51:18.007847202 +0000 UTC m=+1251.316041910" watchObservedRunningTime="2026-01-26 18:51:18.013517083 +0000 UTC m=+1251.321711791" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.070212 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" podStartSLOduration=6.754611089 podStartE2EDuration="48.07019027s" podCreationTimestamp="2026-01-26 18:50:30 +0000 
UTC" firstStartedPulling="2026-01-26 18:50:35.189527123 +0000 UTC m=+1208.497721831" lastFinishedPulling="2026-01-26 18:51:16.505106304 +0000 UTC m=+1249.813301012" observedRunningTime="2026-01-26 18:51:18.061800546 +0000 UTC m=+1251.369995264" watchObservedRunningTime="2026-01-26 18:51:18.07019027 +0000 UTC m=+1251.378384978" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.121843 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b" podStartSLOduration=11.705299023 podStartE2EDuration="48.12182426s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.185944679 +0000 UTC m=+1208.494139387" lastFinishedPulling="2026-01-26 18:51:11.602469916 +0000 UTC m=+1244.910664624" observedRunningTime="2026-01-26 18:51:18.111269896 +0000 UTC m=+1251.419464614" watchObservedRunningTime="2026-01-26 18:51:18.12182426 +0000 UTC m=+1251.430018968" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.147056 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" podStartSLOduration=7.096114338 podStartE2EDuration="48.147035591s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.21035748 +0000 UTC m=+1208.518552188" lastFinishedPulling="2026-01-26 18:51:16.261278733 +0000 UTC m=+1249.569473441" observedRunningTime="2026-01-26 18:51:18.146684902 +0000 UTC m=+1251.454879610" watchObservedRunningTime="2026-01-26 18:51:18.147035591 +0000 UTC m=+1251.455230309" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.173168 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" podStartSLOduration=10.791845601 podStartE2EDuration="48.173146582s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" 
firstStartedPulling="2026-01-26 18:50:34.222354333 +0000 UTC m=+1207.530549041" lastFinishedPulling="2026-01-26 18:51:11.603655314 +0000 UTC m=+1244.911850022" observedRunningTime="2026-01-26 18:51:18.167345009 +0000 UTC m=+1251.475539717" watchObservedRunningTime="2026-01-26 18:51:18.173146582 +0000 UTC m=+1251.481341280" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.190476 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" podStartSLOduration=6.890900656 podStartE2EDuration="48.190461212s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.174217826 +0000 UTC m=+1208.482412534" lastFinishedPulling="2026-01-26 18:51:16.473778382 +0000 UTC m=+1249.781973090" observedRunningTime="2026-01-26 18:51:18.186831468 +0000 UTC m=+1251.495026176" watchObservedRunningTime="2026-01-26 18:51:18.190461212 +0000 UTC m=+1251.498655920" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.219502 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" podStartSLOduration=5.665918358 podStartE2EDuration="48.219486081s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:33.864459484 +0000 UTC m=+1207.172654212" lastFinishedPulling="2026-01-26 18:51:16.418027227 +0000 UTC m=+1249.726221935" observedRunningTime="2026-01-26 18:51:18.218384186 +0000 UTC m=+1251.526578894" watchObservedRunningTime="2026-01-26 18:51:18.219486081 +0000 UTC m=+1251.527680789" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.832665 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" event={"ID":"c7cfbb47-6d43-4030-a3d1-516430aeffb7","Type":"ContainerStarted","Data":"689bd433675f820f03a79f94ef816a4dcdaba4c2998cc200c133ea8a85802d6e"} Jan 
26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.833338 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.835402 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.836296 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.859955 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" podStartSLOduration=47.859937334 podStartE2EDuration="47.859937334s" podCreationTimestamp="2026-01-26 18:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:51:18.85631368 +0000 UTC m=+1252.164508388" watchObservedRunningTime="2026-01-26 18:51:18.859937334 +0000 UTC m=+1252.168132042" Jan 26 18:51:18 crc kubenswrapper[4737]: I0126 18:51:18.882098 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" podStartSLOduration=7.613778077 podStartE2EDuration="48.882081164s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.205471405 +0000 UTC m=+1208.513666113" lastFinishedPulling="2026-01-26 18:51:16.473774492 +0000 UTC m=+1249.781969200" observedRunningTime="2026-01-26 18:51:18.881616973 +0000 UTC m=+1252.189811681" watchObservedRunningTime="2026-01-26 18:51:18.882081164 +0000 UTC m=+1252.190275872" Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.192217 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-v9b85" Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.861989 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" event={"ID":"5175d9d3-4bf9-4f52-be13-e33b02e03592","Type":"ContainerStarted","Data":"f57bdb36e2274ad596fcd5954e4a7219cc516581c424a17fca819847173ff049"} Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.862890 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.864687 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" event={"ID":"6904aa8b-12dd-4139-9a9f-f60be010cf3b","Type":"ContainerStarted","Data":"5e51c5a22461694142cc12c69a855376f1c72509e3dce58fffe6204d6981ff35"} Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.864870 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.901158 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" podStartSLOduration=47.799278345 podStartE2EDuration="51.901138175s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:51:17.058850017 +0000 UTC m=+1250.367044725" lastFinishedPulling="2026-01-26 18:51:21.160709847 +0000 UTC m=+1254.468904555" observedRunningTime="2026-01-26 18:51:21.89271379 +0000 UTC m=+1255.200908508" watchObservedRunningTime="2026-01-26 18:51:21.901138175 +0000 UTC m=+1255.209332893" Jan 26 18:51:21 crc kubenswrapper[4737]: I0126 18:51:21.919503 4737 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" podStartSLOduration=48.00165921 podStartE2EDuration="51.919477017s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:51:17.238648502 +0000 UTC m=+1250.546843210" lastFinishedPulling="2026-01-26 18:51:21.156466309 +0000 UTC m=+1254.464661017" observedRunningTime="2026-01-26 18:51:21.912143058 +0000 UTC m=+1255.220337766" watchObservedRunningTime="2026-01-26 18:51:21.919477017 +0000 UTC m=+1255.227671725" Jan 26 18:51:21 crc kubenswrapper[4737]: E0126 18:51:21.984477 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podUID="8aa44595-2352-4a3e-888f-3409254cde36" Jan 26 18:51:22 crc kubenswrapper[4737]: I0126 18:51:22.161833 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-55xkx" Jan 26 18:51:22 crc kubenswrapper[4737]: I0126 18:51:22.297336 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-lfh5n" Jan 26 18:51:22 crc kubenswrapper[4737]: I0126 18:51:22.333021 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cf49855b4-zfzgj" Jan 26 18:51:22 crc kubenswrapper[4737]: I0126 18:51:22.367145 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-4n95b" Jan 26 18:51:22 crc kubenswrapper[4737]: I0126 18:51:22.399553 4737 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-hx2gj" Jan 26 18:51:23 crc kubenswrapper[4737]: I0126 18:51:23.916173 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6ffbd5d47c-xwdkt" Jan 26 18:51:24 crc kubenswrapper[4737]: I0126 18:51:24.888644 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" event={"ID":"97c0989d-f677-4460-b62b-4733c7db29d4","Type":"ContainerStarted","Data":"77058139c75daef0ab21cf59b1d9f305e975feac84592f34062e570c385fa8c2"} Jan 26 18:51:24 crc kubenswrapper[4737]: I0126 18:51:24.889680 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:51:24 crc kubenswrapper[4737]: I0126 18:51:24.908650 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" podStartSLOduration=4.848819461 podStartE2EDuration="54.908633658s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:33.891945896 +0000 UTC m=+1207.200140604" lastFinishedPulling="2026-01-26 18:51:23.951760093 +0000 UTC m=+1257.259954801" observedRunningTime="2026-01-26 18:51:24.90825471 +0000 UTC m=+1258.216449418" watchObservedRunningTime="2026-01-26 18:51:24.908633658 +0000 UTC m=+1258.216828366" Jan 26 18:51:24 crc kubenswrapper[4737]: E0126 18:51:24.983681 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podUID="01b83dfe-58bb-40fa-a0e8-b942b4c79b72" Jan 26 18:51:29 crc kubenswrapper[4737]: I0126 18:51:29.931018 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" event={"ID":"3c491fdc-889c-4d4a-aedd-60a242e26027","Type":"ContainerStarted","Data":"ce51a2520357567a495927976cba31f037a81c5f4437b4f6567ec96c40c0d448"} Jan 26 18:51:29 crc kubenswrapper[4737]: I0126 18:51:29.956870 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5xvj4" podStartSLOduration=5.478043818 podStartE2EDuration="58.956841001s" podCreationTimestamp="2026-01-26 18:50:31 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.203190063 +0000 UTC m=+1208.511384771" lastFinishedPulling="2026-01-26 18:51:28.681987246 +0000 UTC m=+1261.990181954" observedRunningTime="2026-01-26 18:51:29.947326622 +0000 UTC m=+1263.255521330" watchObservedRunningTime="2026-01-26 18:51:29.956841001 +0000 UTC m=+1263.265035729" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.818610 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-bl8hk" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.823019 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-6mjbw" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.905989 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j9nc9" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.942262 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" 
event={"ID":"3164f5a5-0f37-4ab6-bc2a-51978eb9f842","Type":"ContainerStarted","Data":"16e9c3dba8f83172bca9294c4e99d129b499b186df18796f7baa26c34af433ce"} Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.943556 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.950416 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.950461 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.950506 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.951369 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.951432 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" 
containerName="machine-config-daemon" containerID="cri-o://c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc" gracePeriod=600 Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.964537 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" event={"ID":"03d41d00-eefc-45c4-aaea-f09a5e34362b","Type":"ContainerStarted","Data":"3ad466a7f2d9c812f316c177b297869d95dc58afa57ee3208b94e142093daf4b"} Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.965549 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.970060 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-kq82d" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.974842 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" podStartSLOduration=6.489444623 podStartE2EDuration="1m0.974822806s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.176049648 +0000 UTC m=+1208.484244356" lastFinishedPulling="2026-01-26 18:51:29.661427811 +0000 UTC m=+1262.969622539" observedRunningTime="2026-01-26 18:51:30.968534131 +0000 UTC m=+1264.276728839" watchObservedRunningTime="2026-01-26 18:51:30.974822806 +0000 UTC m=+1264.283017504" Jan 26 18:51:30 crc kubenswrapper[4737]: I0126 18:51:30.990421 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" podStartSLOduration=5.536243965 podStartE2EDuration="1m0.990397555s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:34.209059563 +0000 UTC m=+1207.517254271" 
lastFinishedPulling="2026-01-26 18:51:29.663213153 +0000 UTC m=+1262.971407861" observedRunningTime="2026-01-26 18:51:30.98714635 +0000 UTC m=+1264.295341058" watchObservedRunningTime="2026-01-26 18:51:30.990397555 +0000 UTC m=+1264.298592263" Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.013719 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p42h8" Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.185570 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jpmmh" Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.241268 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz" Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.405589 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-qr8vf" Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.978155 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc" exitCode=0 Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.978220 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc"} Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.978550 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c"} Jan 26 18:51:31 crc kubenswrapper[4737]: I0126 18:51:31.978568 4737 scope.go:117] "RemoveContainer" containerID="234088f96dcb5aa606a89e947e92e3f85265b7ec69ab162d10f16abfa114b135" Jan 26 18:51:32 crc kubenswrapper[4737]: I0126 18:51:32.915171 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-9lqk4" Jan 26 18:51:33 crc kubenswrapper[4737]: I0126 18:51:33.591406 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" Jan 26 18:51:35 crc kubenswrapper[4737]: I0126 18:51:35.006496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" event={"ID":"8aa44595-2352-4a3e-888f-3409254cde36","Type":"ContainerStarted","Data":"9564f94c9c10f3245b76b78fbe8bb161bba4676ad4287fd7358ce91cc6e4b22d"} Jan 26 18:51:35 crc kubenswrapper[4737]: I0126 18:51:35.006726 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" Jan 26 18:51:35 crc kubenswrapper[4737]: I0126 18:51:35.025920 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" podStartSLOduration=6.475603691 podStartE2EDuration="1m5.025900195s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.235198399 +0000 UTC m=+1208.543393107" lastFinishedPulling="2026-01-26 18:51:33.785494903 +0000 UTC m=+1267.093689611" observedRunningTime="2026-01-26 18:51:35.020750256 +0000 UTC m=+1268.328944964" watchObservedRunningTime="2026-01-26 18:51:35.025900195 +0000 UTC m=+1268.334094893" Jan 26 
18:51:40 crc kubenswrapper[4737]: I0126 18:51:40.045547 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" event={"ID":"01b83dfe-58bb-40fa-a0e8-b942b4c79b72","Type":"ContainerStarted","Data":"44017b2ecf9b7bb710a93c5aeaad3a80a9e32a7005bc741b77f09be70d9a73b8"} Jan 26 18:51:40 crc kubenswrapper[4737]: I0126 18:51:40.046519 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" Jan 26 18:51:40 crc kubenswrapper[4737]: I0126 18:51:40.063192 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" podStartSLOduration=5.858487417 podStartE2EDuration="1m10.063172526s" podCreationTimestamp="2026-01-26 18:50:30 +0000 UTC" firstStartedPulling="2026-01-26 18:50:35.235299762 +0000 UTC m=+1208.543494460" lastFinishedPulling="2026-01-26 18:51:39.439984861 +0000 UTC m=+1272.748179569" observedRunningTime="2026-01-26 18:51:40.060391802 +0000 UTC m=+1273.368586510" watchObservedRunningTime="2026-01-26 18:51:40.063172526 +0000 UTC m=+1273.371367234" Jan 26 18:51:41 crc kubenswrapper[4737]: I0126 18:51:41.184268 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-zbp84" Jan 26 18:51:41 crc kubenswrapper[4737]: I0126 18:51:41.380678 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xrm44" Jan 26 18:51:42 crc kubenswrapper[4737]: I0126 18:51:42.298462 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9lkfc" Jan 26 18:51:51 crc kubenswrapper[4737]: I0126 18:51:51.338465 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-tz995" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.590152 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.592416 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.597182 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.597201 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-v559c" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.597363 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.597525 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.602993 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.677301 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.679414 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.686814 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.690495 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.718914 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.719241 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxcq\" (UniqueName: \"kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.820955 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcdss\" (UniqueName: \"kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.821065 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 
26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.821168 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.821220 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxcq\" (UniqueName: \"kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.821266 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.822150 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.852088 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxcq\" (UniqueName: \"kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq\") pod \"dnsmasq-dns-675f4bcbfc-9xtr2\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 
18:52:11.918364 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.922685 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcdss\" (UniqueName: \"kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.922815 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.922896 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.923800 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.923856 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:11 crc kubenswrapper[4737]: I0126 18:52:11.955374 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcdss\" (UniqueName: \"kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss\") pod \"dnsmasq-dns-78dd6ddcc-527jf\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:12 crc kubenswrapper[4737]: I0126 18:52:12.005116 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:52:12 crc kubenswrapper[4737]: I0126 18:52:12.466900 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:52:12 crc kubenswrapper[4737]: W0126 18:52:12.536829 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb1f1f93_5c26_47f2_b5f1_42d96632aa89.slice/crio-9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549 WatchSource:0}: Error finding container 9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549: Status 404 returned error can't find the container with id 9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549 Jan 26 18:52:12 crc kubenswrapper[4737]: I0126 18:52:12.538421 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:52:13 crc kubenswrapper[4737]: I0126 18:52:13.340101 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" event={"ID":"bb1f1f93-5c26-47f2-b5f1-42d96632aa89","Type":"ContainerStarted","Data":"9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549"} Jan 26 18:52:13 crc kubenswrapper[4737]: I0126 18:52:13.372272 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" 
event={"ID":"425352a9-7fbe-4370-be54-cb85d79de0b1","Type":"ContainerStarted","Data":"e7182ffcabee635b32cc442b3843c5fcea38dc0e40cd3fea5484dfa678c2e316"} Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.463833 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.496843 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.498435 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.516187 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.583433 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.583938 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.584041 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvnb\" (UniqueName: \"kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " 
pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.686496 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.686636 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.686670 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvnb\" (UniqueName: \"kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.691062 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.691271 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.711118 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvnb\" (UniqueName: \"kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb\") pod \"dnsmasq-dns-666b6646f7-2j527\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.827341 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.881562 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.952385 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.954330 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:14 crc kubenswrapper[4737]: I0126 18:52:14.968761 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.104449 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.105722 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkjmc\" (UniqueName: \"kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" 
Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.105761 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.207336 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.207728 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkjmc\" (UniqueName: \"kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.207787 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.209513 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.235045 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.280416 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkjmc\" (UniqueName: \"kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc\") pod \"dnsmasq-dns-57d769cc4f-bj2wh\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.284527 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.582861 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.646653 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.649943 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.654264 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698093 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-trjbh" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698144 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698370 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698381 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698513 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.698601 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.704735 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.731325 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.741121 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.784685 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.817825 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.817949 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842199 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zktn\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842288 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842312 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842337 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls\") pod 
\"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842391 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842411 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842426 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842458 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842475 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info\") pod \"rabbitmq-server-2\" (UID: 
\"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842708 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.842849 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843191 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843225 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843279 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " 
pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843344 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843503 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843587 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.843931 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.844019 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrm67\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " 
pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.844191 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.844518 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.844719 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.869237 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.948883 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.948932 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrm67\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67\") pod 
\"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.948956 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.948981 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949000 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949018 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949041 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 
18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949062 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949117 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949193 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zktn\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949212 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949247 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949266 
4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949289 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949312 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949335 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949365 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949389 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949443 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949463 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949491 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949512 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info\") pod 
\"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949539 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949579 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949615 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949646 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949666 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: 
I0126 18:52:15.949696 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcwlb\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949726 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949757 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949794 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.949817 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.951570 4737 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.951574 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.951655 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.951918 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.952776 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.953049 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.953685 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.956459 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.956765 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.956869 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.958414 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.958701 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.960266 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.961553 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.961994 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.962710 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.963786 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " 
pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.963931 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.964062 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4b9ceb8c52abf651bff7514af3cc683572e9e232935ffe7b4905a076db603b6/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.969396 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.969859 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.969911 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e9878083e57acdd195c36221ffb7f100349a5e63230bc6c4e3af1f5b75c0abd7/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 18:52:15 crc kubenswrapper[4737]: I0126 18:52:15.985629 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrm67\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.015407 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zktn\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052146 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcwlb\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052246 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: 
\"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052269 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052306 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052325 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052354 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052398 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 
18:52:16.052417 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052440 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052459 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.052550 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.055563 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.056519 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.056798 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " pod="openstack/rabbitmq-server-2" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.057782 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.057898 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.058748 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.060732 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") 
" pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.076805 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.080696 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.081170 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.081207 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94be7bc95e95b6be2553bf8bbbf70b563164647bca719a84027c68345843d929/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.131499 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcwlb\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.132106 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.134748 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.136298 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.145701 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j5nkh" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.145912 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.146019 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.146049 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.146158 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.146349 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.146482 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155200 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155386 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155448 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155479 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155541 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbj5x\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.155692 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.156138 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.156240 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.156413 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.156436 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.156509 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.163483 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.171454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " pod="openstack/rabbitmq-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.179178 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.210416 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.227013 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: W0126 18:52:16.229733 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ac6f0c1_3e0d_4896_a392_913dc6576566.slice/crio-024ef884157f7d4eaa2dc8dc6e1a05750994da6819c4c152c88a4b02410ae943 WatchSource:0}: Error finding container 
024ef884157f7d4eaa2dc8dc6e1a05750994da6819c4c152c88a4b02410ae943: Status 404 returned error can't find the container with id 024ef884157f7d4eaa2dc8dc6e1a05750994da6819c4c152c88a4b02410ae943 Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258424 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258479 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258525 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258552 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258594 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbj5x\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258670 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258694 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258741 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258803 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258826 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.258859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.263501 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.264321 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.264454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.264461 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 
18:52:16.267152 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.270179 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.273579 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.277881 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.283029 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.283874 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbj5x\" (UniqueName: 
\"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.292535 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.292634 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd4fc01515cf411f2c3c1201953e7057ccc603e7317600a03debd4076f0e2cbc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.357024 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.458858 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.481581 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.511469 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.512012 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" event={"ID":"7ac6f0c1-3e0d-4896-a392-913dc6576566","Type":"ContainerStarted","Data":"024ef884157f7d4eaa2dc8dc6e1a05750994da6819c4c152c88a4b02410ae943"} Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.513717 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2j527" event={"ID":"8b254d0c-eff7-4b4a-8814-a261c66bac0b","Type":"ContainerStarted","Data":"9e291dc82814af364677fa831ddfd9a2d7145db1694d81d807fd640b69196dcc"} Jan 26 18:52:16 crc kubenswrapper[4737]: I0126 18:52:16.793787 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.123878 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.179908 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.271397 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.273181 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.276031 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-n5nvh" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.276706 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.276866 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.277031 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.282495 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.285944 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.353677 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.384344 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.384404 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 
18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.384428 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.384450 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.384839 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.385092 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.385229 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-231867a2-631c-483b-995d-c3db3e151a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-231867a2-631c-483b-995d-c3db3e151a0d\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" 
Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.385544 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsmr\" (UniqueName: \"kubernetes.io/projected/ca50689d-e7af-4267-9ee0-11d254c08962-kube-api-access-ldsmr\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.487219 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.487329 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-231867a2-631c-483b-995d-c3db3e151a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-231867a2-631c-483b-995d-c3db3e151a0d\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.487353 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsmr\" (UniqueName: \"kubernetes.io/projected/ca50689d-e7af-4267-9ee0-11d254c08962-kube-api-access-ldsmr\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.488571 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 
18:52:17.488658 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.488701 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.488727 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.488749 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.488811 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.489221 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/ca50689d-e7af-4267-9ee0-11d254c08962-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.491175 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.491610 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca50689d-e7af-4267-9ee0-11d254c08962-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.495574 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.497141 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca50689d-e7af-4267-9ee0-11d254c08962-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.503305 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.503351 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-231867a2-631c-483b-995d-c3db3e151a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-231867a2-631c-483b-995d-c3db3e151a0d\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4190c646ed3422e42ac81dafa96d880314d4552510694ea8a6e9511322a08709/globalmount\"" pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.511655 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsmr\" (UniqueName: \"kubernetes.io/projected/ca50689d-e7af-4267-9ee0-11d254c08962-kube-api-access-ldsmr\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.522400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerStarted","Data":"8f62d35970963431573036fce6585d65aa0b4fb788a7b5e7fa3cc2b77ba8009e"} Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.523525 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerStarted","Data":"1ee9dd549f27c874bf9d6d6ea6424c9bd6686b9ddc095a7c415dd84a7ad6f6b4"} Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.524750 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerStarted","Data":"1ce82639dbc64e8e36e50a8dca2bc037cfe125204c0dbd49fb60a56482e408a3"} Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.527854 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-2" event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerStarted","Data":"6a1a51e2413b378d6a7940812f10933c9a99e1b502881a766a143b74e90c7c5a"} Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.571901 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-231867a2-631c-483b-995d-c3db3e151a0d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-231867a2-631c-483b-995d-c3db3e151a0d\") pod \"openstack-galera-0\" (UID: \"ca50689d-e7af-4267-9ee0-11d254c08962\") " pod="openstack/openstack-galera-0" Jan 26 18:52:17 crc kubenswrapper[4737]: I0126 18:52:17.624933 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.349168 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:52:18 crc kubenswrapper[4737]: W0126 18:52:18.359118 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca50689d_e7af_4267_9ee0_11d254c08962.slice/crio-5104cbd16fb58d52685bddbe162ad7c779cad7268a64225f41c6f5d5ffff57d5 WatchSource:0}: Error finding container 5104cbd16fb58d52685bddbe162ad7c779cad7268a64225f41c6f5d5ffff57d5: Status 404 returned error can't find the container with id 5104cbd16fb58d52685bddbe162ad7c779cad7268a64225f41c6f5d5ffff57d5 Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.516469 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.518774 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.526869 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hnk4b" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.527041 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.527157 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.527262 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.537998 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.550476 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca50689d-e7af-4267-9ee0-11d254c08962","Type":"ContainerStarted","Data":"5104cbd16fb58d52685bddbe162ad7c779cad7268a64225f41c6f5d5ffff57d5"} Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.625668 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.625783 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " 
pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.625815 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.625898 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.625975 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdcn\" (UniqueName: \"kubernetes.io/projected/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kube-api-access-czdcn\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.626004 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.626274 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.626456 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729539 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729582 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729611 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729650 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czdcn\" (UniqueName: 
\"kubernetes.io/projected/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kube-api-access-czdcn\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729684 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729760 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729791 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.729832 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.730715 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.732114 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.732205 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.739945 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/89018ab2-3fc5-4855-b47e-ac19d8008c8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.745133 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.745220 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89018ab2-3fc5-4855-b47e-ac19d8008c8e-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.745466 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.751876 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2fa583d5db544e76bd4e34ee19b34509eb075013b06a5dc6976bdbc5e9814cf0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.766858 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdcn\" (UniqueName: \"kubernetes.io/projected/89018ab2-3fc5-4855-b47e-ac19d8008c8e-kube-api-access-czdcn\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.839801 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.846040 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.854025 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5t7z5" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.854994 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.855053 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.874247 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.926490 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72f385ce-d8d3-4674-ab8b-5520554d3dc2\") pod \"openstack-cell1-galera-0\" (UID: \"89018ab2-3fc5-4855-b47e-ac19d8008c8e\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.935265 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/2618c486-a631-4a87-ba8f-d5ccad83a208-kube-api-access-frkbj\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.935325 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.935359 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.935387 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-kolla-config\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:18 crc kubenswrapper[4737]: I0126 18:52:18.935470 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-config-data\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.037813 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/2618c486-a631-4a87-ba8f-d5ccad83a208-kube-api-access-frkbj\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.037892 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.037946 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.037983 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-kolla-config\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.038133 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-config-data\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.039585 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-config-data\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.040580 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2618c486-a631-4a87-ba8f-d5ccad83a208-kolla-config\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.049128 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.051692 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2618c486-a631-4a87-ba8f-d5ccad83a208-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.082710 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkbj\" (UniqueName: \"kubernetes.io/projected/2618c486-a631-4a87-ba8f-d5ccad83a208-kube-api-access-frkbj\") pod \"memcached-0\" (UID: \"2618c486-a631-4a87-ba8f-d5ccad83a208\") " pod="openstack/memcached-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.147241 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:52:19 crc kubenswrapper[4737]: I0126 18:52:19.257848 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 18:52:20 crc kubenswrapper[4737]: I0126 18:52:20.010332 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:52:20 crc kubenswrapper[4737]: W0126 18:52:20.020991 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89018ab2_3fc5_4855_b47e_ac19d8008c8e.slice/crio-2277ea1b70577cf5268c3e7413b496207721ef8569bea27cc6ae6f84cf5e4118 WatchSource:0}: Error finding container 2277ea1b70577cf5268c3e7413b496207721ef8569bea27cc6ae6f84cf5e4118: Status 404 returned error can't find the container with id 2277ea1b70577cf5268c3e7413b496207721ef8569bea27cc6ae6f84cf5e4118 Jan 26 18:52:20 crc kubenswrapper[4737]: I0126 18:52:20.294502 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:52:20 crc kubenswrapper[4737]: W0126 18:52:20.402172 4737 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2618c486_a631_4a87_ba8f_d5ccad83a208.slice/crio-92ec41ca2b7960846f1f7cc8b5688d814b2b7567d005b17463bd4842ebf5f541 WatchSource:0}: Error finding container 92ec41ca2b7960846f1f7cc8b5688d814b2b7567d005b17463bd4842ebf5f541: Status 404 returned error can't find the container with id 92ec41ca2b7960846f1f7cc8b5688d814b2b7567d005b17463bd4842ebf5f541 Jan 26 18:52:20 crc kubenswrapper[4737]: I0126 18:52:20.638100 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2618c486-a631-4a87-ba8f-d5ccad83a208","Type":"ContainerStarted","Data":"92ec41ca2b7960846f1f7cc8b5688d814b2b7567d005b17463bd4842ebf5f541"} Jan 26 18:52:20 crc kubenswrapper[4737]: I0126 18:52:20.641722 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"89018ab2-3fc5-4855-b47e-ac19d8008c8e","Type":"ContainerStarted","Data":"2277ea1b70577cf5268c3e7413b496207721ef8569bea27cc6ae6f84cf5e4118"} Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.316624 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.328939 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.337952 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hhvzt" Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.372595 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.428185 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vcw\" (UniqueName: \"kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw\") pod \"kube-state-metrics-0\" (UID: \"aba2f81e-11de-4d89-ab90-34ca36d205d6\") " pod="openstack/kube-state-metrics-0" Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.530307 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vcw\" (UniqueName: \"kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw\") pod \"kube-state-metrics-0\" (UID: \"aba2f81e-11de-4d89-ab90-34ca36d205d6\") " pod="openstack/kube-state-metrics-0" Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.568655 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vcw\" (UniqueName: \"kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw\") pod \"kube-state-metrics-0\" (UID: \"aba2f81e-11de-4d89-ab90-34ca36d205d6\") " pod="openstack/kube-state-metrics-0" Jan 26 18:52:21 crc kubenswrapper[4737]: I0126 18:52:21.694642 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.196594 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.198318 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.205646 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.206437 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.206573 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-754h2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.257942 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2p4\" (UniqueName: \"kubernetes.io/projected/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-kube-api-access-qz2p4\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.257989 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 
18:52:22.364003 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2p4\" (UniqueName: \"kubernetes.io/projected/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-kube-api-access-qz2p4\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.364049 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: E0126 18:52:22.364224 4737 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 26 18:52:22 crc kubenswrapper[4737]: E0126 18:52:22.364274 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert podName:6b80cd0d-81ac-4f45-a80c-3b1cf442fc44 nodeName:}" failed. No retries permitted until 2026-01-26 18:52:22.864258303 +0000 UTC m=+1316.172453011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert") pod "observability-ui-dashboards-66cbf594b5-ckxn2" (UID: "6b80cd0d-81ac-4f45-a80c-3b1cf442fc44") : secret "observability-ui-dashboards" not found Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.408153 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2p4\" (UniqueName: \"kubernetes.io/projected/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-kube-api-access-qz2p4\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.470329 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.480546 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.516711 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.516946 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517113 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517263 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517423 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f47fq" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517591 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517733 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.517907 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675041 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 
18:52:22.675381 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675426 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675460 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675494 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675568 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675568 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675590 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675632 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675922 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.675963 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcw7m\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 
crc kubenswrapper[4737]: I0126 18:52:22.748578 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782351 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782402 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782465 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782515 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782551 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782604 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782632 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcw7m\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782692 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782727 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.782755 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.783921 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.787472 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.817246 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.817816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc 
kubenswrapper[4737]: I0126 18:52:22.819032 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.819980 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.831971 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.835872 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.887456 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:22 crc kubenswrapper[4737]: E0126 18:52:22.887771 4737 secret.go:188] Couldn't get secret 
openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 26 18:52:22 crc kubenswrapper[4737]: E0126 18:52:22.887823 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert podName:6b80cd0d-81ac-4f45-a80c-3b1cf442fc44 nodeName:}" failed. No retries permitted until 2026-01-26 18:52:23.887809911 +0000 UTC m=+1317.196004619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert") pod "observability-ui-dashboards-66cbf594b5-ckxn2" (UID: "6b80cd0d-81ac-4f45-a80c-3b1cf442fc44") : secret "observability-ui-dashboards" not found Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.880154 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcw7m\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.937761 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67df48bc8d-j5g9z"] Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.939322 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:22 crc kubenswrapper[4737]: I0126 18:52:22.969870 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67df48bc8d-j5g9z"] Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.125150 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.125222 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74f1aeb064e68dd5bb300f4ee340cba58d92675dd4510f16aad36f018da9b6f4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136335 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136402 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-oauth-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136432 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-trusted-ca-bundle\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136472 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136489 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-oauth-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136542 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-service-ca\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.136607 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dh5\" (UniqueName: \"kubernetes.io/projected/846a23e4-f5aa-4975-af0a-d02c60aa08fd-kube-api-access-r7dh5\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238192 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-trusted-ca-bundle\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238270 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238289 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-oauth-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238354 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-service-ca\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238426 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dh5\" (UniqueName: \"kubernetes.io/projected/846a23e4-f5aa-4975-af0a-d02c60aa08fd-kube-api-access-r7dh5\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238461 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.238502 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-oauth-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.243354 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-oauth-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.245843 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.247108 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-oauth-serving-cert\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.247449 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-service-ca\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.247538 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-trusted-ca-bundle\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.253488 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/846a23e4-f5aa-4975-af0a-d02c60aa08fd-console-config\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.270737 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dh5\" (UniqueName: \"kubernetes.io/projected/846a23e4-f5aa-4975-af0a-d02c60aa08fd-kube-api-access-r7dh5\") pod \"console-67df48bc8d-j5g9z\" (UID: \"846a23e4-f5aa-4975-af0a-d02c60aa08fd\") " pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.289517 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.313049 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.417865 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.820557 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aba2f81e-11de-4d89-ab90-34ca36d205d6","Type":"ContainerStarted","Data":"a524c26effe0029f371a7ffb021d11f06bc363ce1a05d7072314e40b8034c390"} Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.971523 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:23 crc kubenswrapper[4737]: I0126 18:52:23.991436 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b80cd0d-81ac-4f45-a80c-3b1cf442fc44-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-ckxn2\" (UID: \"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.063435 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.453629 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrckb"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.455493 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.461986 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6c8mb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.462413 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.475163 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.476301 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tnjz7"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.480824 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.486787 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.529724 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tnjz7"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.600578 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run-ovn\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.600894 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11408d0f-4b45-4dab-bc9e-965ac14aed79-scripts\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " 
pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.600999 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-lib\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601297 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-run\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601419 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-log\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601497 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601527 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4mq\" (UniqueName: \"kubernetes.io/projected/b875fe78-bf29-45f1-a4a5-f3881134a171-kube-api-access-rq4mq\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 
18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601567 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-ovn-controller-tls-certs\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601602 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-combined-ca-bundle\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601679 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xdh\" (UniqueName: \"kubernetes.io/projected/11408d0f-4b45-4dab-bc9e-965ac14aed79-kube-api-access-54xdh\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601703 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-etc-ovs\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.601728 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-log-ovn\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc 
kubenswrapper[4737]: I0126 18:52:24.601809 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b875fe78-bf29-45f1-a4a5-f3881134a171-scripts\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.626044 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.705537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-run\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.706451 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-run\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.720280 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-log\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.720533 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 
crc kubenswrapper[4737]: I0126 18:52:24.720589 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-log\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.720607 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4mq\" (UniqueName: \"kubernetes.io/projected/b875fe78-bf29-45f1-a4a5-f3881134a171-kube-api-access-rq4mq\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.720723 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-ovn-controller-tls-certs\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.720773 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-combined-ca-bundle\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.721097 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722372 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-54xdh\" (UniqueName: \"kubernetes.io/projected/11408d0f-4b45-4dab-bc9e-965ac14aed79-kube-api-access-54xdh\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722412 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-etc-ovs\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722449 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-log-ovn\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722718 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-log-ovn\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722823 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b875fe78-bf29-45f1-a4a5-f3881134a171-scripts\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.722935 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run-ovn\") pod \"ovn-controller-zrckb\" (UID: 
\"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.723063 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11408d0f-4b45-4dab-bc9e-965ac14aed79-scripts\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.723303 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-lib\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.723687 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-var-lib\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.723838 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/11408d0f-4b45-4dab-bc9e-965ac14aed79-var-run-ovn\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.728605 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b875fe78-bf29-45f1-a4a5-f3881134a171-scripts\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.728833 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b875fe78-bf29-45f1-a4a5-f3881134a171-etc-ovs\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.729518 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11408d0f-4b45-4dab-bc9e-965ac14aed79-scripts\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.729822 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-combined-ca-bundle\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.746149 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4mq\" (UniqueName: \"kubernetes.io/projected/b875fe78-bf29-45f1-a4a5-f3881134a171-kube-api-access-rq4mq\") pod \"ovn-controller-ovs-tnjz7\" (UID: \"b875fe78-bf29-45f1-a4a5-f3881134a171\") " pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.748651 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/11408d0f-4b45-4dab-bc9e-965ac14aed79-ovn-controller-tls-certs\") pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.755859 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xdh\" (UniqueName: \"kubernetes.io/projected/11408d0f-4b45-4dab-bc9e-965ac14aed79-kube-api-access-54xdh\") 
pod \"ovn-controller-zrckb\" (UID: \"11408d0f-4b45-4dab-bc9e-965ac14aed79\") " pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.832272 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67df48bc8d-j5g9z"] Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.851763 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb" Jan 26 18:52:24 crc kubenswrapper[4737]: I0126 18:52:24.879366 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.066709 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.079590 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.089822 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.090169 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.090365 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.091416 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.092639 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bkrl8" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.098487 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:52:25 crc 
kubenswrapper[4737]: I0126 18:52:25.243647 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243701 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243760 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243798 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243845 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243911 
4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqwxr\" (UniqueName: \"kubernetes.io/projected/6465a03e-5fc8-4886-b68b-531fe218230f-kube-api-access-hqwxr\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243939 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.243975 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.345617 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346155 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqwxr\" (UniqueName: \"kubernetes.io/projected/6465a03e-5fc8-4886-b68b-531fe218230f-kube-api-access-hqwxr\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346207 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346272 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346317 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346343 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346426 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.346450 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-scripts\") 
pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.347734 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.348456 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6465a03e-5fc8-4886-b68b-531fe218230f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.363315 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.380884 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.402274 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.403378 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6465a03e-5fc8-4886-b68b-531fe218230f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.416579 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqwxr\" (UniqueName: \"kubernetes.io/projected/6465a03e-5fc8-4886-b68b-531fe218230f-kube-api-access-hqwxr\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.535834 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.535876 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/71b9798f2349447e9966ee8c399e17f7d8af44a1d8c3bc8d46f9e376af963b1f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:25 crc kubenswrapper[4737]: I0126 18:52:25.767677 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-db638be7-f0b9-4505-81f5-bdc736cd94c4\") pod \"ovsdbserver-nb-0\" (UID: \"6465a03e-5fc8-4886-b68b-531fe218230f\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:26 crc kubenswrapper[4737]: I0126 18:52:26.019093 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:52:26 crc kubenswrapper[4737]: W0126 18:52:26.122980 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod539f99ad_d4f8_4e02_aca3_f247bc802698.slice/crio-d15c6f609cd91a92edf04dbfbaf960fd3d9092a25c528a4624b62dc5fc4e75c6 WatchSource:0}: Error finding container d15c6f609cd91a92edf04dbfbaf960fd3d9092a25c528a4624b62dc5fc4e75c6: Status 404 returned error can't find the container with id d15c6f609cd91a92edf04dbfbaf960fd3d9092a25c528a4624b62dc5fc4e75c6 Jan 26 18:52:26 crc kubenswrapper[4737]: W0126 18:52:26.126485 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846a23e4_f5aa_4975_af0a_d02c60aa08fd.slice/crio-86ff73aec2de2033e80a074dc1be00443a792b0a0b580a89aac13bad606a2db2 WatchSource:0}: Error finding container 86ff73aec2de2033e80a074dc1be00443a792b0a0b580a89aac13bad606a2db2: Status 404 returned error can't find the container with id 86ff73aec2de2033e80a074dc1be00443a792b0a0b580a89aac13bad606a2db2 Jan 26 18:52:26 crc kubenswrapper[4737]: I0126 18:52:26.719053 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2"] Jan 26 18:52:26 crc kubenswrapper[4737]: I0126 18:52:26.919540 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67df48bc8d-j5g9z" event={"ID":"846a23e4-f5aa-4975-af0a-d02c60aa08fd","Type":"ContainerStarted","Data":"86ff73aec2de2033e80a074dc1be00443a792b0a0b580a89aac13bad606a2db2"} Jan 26 18:52:26 crc kubenswrapper[4737]: I0126 18:52:26.946143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerStarted","Data":"d15c6f609cd91a92edf04dbfbaf960fd3d9092a25c528a4624b62dc5fc4e75c6"} Jan 26 18:52:28 crc kubenswrapper[4737]: 
I0126 18:52:28.561977 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.564575 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.573252 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7jlxq" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.573281 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.575640 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.576047 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.602524 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701370 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-config\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701420 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg844\" (UniqueName: \"kubernetes.io/projected/923f982a-41f5-4c9d-a2dc-50e96e54c283-kube-api-access-qg844\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701451 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701471 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701498 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701556 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701629 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.701658 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.805130 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.805243 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.805281 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.806938 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-config\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.806974 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg844\" (UniqueName: 
\"kubernetes.io/projected/923f982a-41f5-4c9d-a2dc-50e96e54c283-kube-api-access-qg844\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.807009 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.807035 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.807093 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.807501 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.807691 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.808110 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923f982a-41f5-4c9d-a2dc-50e96e54c283-config\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.809717 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.809756 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/264d8107ba6a9b46f47865ffa66218c971a4e21c05e38565e68f960e3f897414/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.853316 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.853620 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg844\" (UniqueName: \"kubernetes.io/projected/923f982a-41f5-4c9d-a2dc-50e96e54c283-kube-api-access-qg844\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.853459 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.855212 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/923f982a-41f5-4c9d-a2dc-50e96e54c283-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:28 crc kubenswrapper[4737]: I0126 18:52:28.914995 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5398c78-c0a0-4246-bce7-45e0b2815936\") pod \"ovsdbserver-sb-0\" (UID: \"923f982a-41f5-4c9d-a2dc-50e96e54c283\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:29 crc kubenswrapper[4737]: I0126 18:52:29.026606 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" event={"ID":"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44","Type":"ContainerStarted","Data":"be3edc4671cd0f95d39ce70613c40d88059a9ad2c5055903fc5d9d0edd81b0bc"} Jan 26 18:52:29 crc kubenswrapper[4737]: I0126 18:52:29.185093 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:52:29 crc kubenswrapper[4737]: I0126 18:52:29.924563 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb"] Jan 26 18:52:30 crc kubenswrapper[4737]: I0126 18:52:30.620636 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tnjz7"] Jan 26 18:52:40 crc kubenswrapper[4737]: W0126 18:52:40.871331 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11408d0f_4b45_4dab_bc9e_965ac14aed79.slice/crio-f0ef3779601030a2f6a2bb2c2da6d56427c8bc03a63e90038f39f07e5c326b9d WatchSource:0}: Error finding container f0ef3779601030a2f6a2bb2c2da6d56427c8bc03a63e90038f39f07e5c326b9d: Status 404 returned error can't find the container with id f0ef3779601030a2f6a2bb2c2da6d56427c8bc03a63e90038f39f07e5c326b9d Jan 26 18:52:40 crc kubenswrapper[4737]: W0126 18:52:40.887430 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb875fe78_bf29_45f1_a4a5_f3881134a171.slice/crio-1aeb348175bc13b0cf9b8a794784f47bf2064e61815686bc3ceb153157663c9a WatchSource:0}: Error finding container 1aeb348175bc13b0cf9b8a794784f47bf2064e61815686bc3ceb153157663c9a: Status 404 returned error can't find the container with id 1aeb348175bc13b0cf9b8a794784f47bf2064e61815686bc3ceb153157663c9a Jan 26 18:52:41 crc kubenswrapper[4737]: I0126 18:52:41.103511 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb" event={"ID":"11408d0f-4b45-4dab-bc9e-965ac14aed79","Type":"ContainerStarted","Data":"f0ef3779601030a2f6a2bb2c2da6d56427c8bc03a63e90038f39f07e5c326b9d"} Jan 26 18:52:41 crc kubenswrapper[4737]: I0126 18:52:41.105403 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tnjz7" 
event={"ID":"b875fe78-bf29-45f1-a4a5-f3881134a171","Type":"ContainerStarted","Data":"1aeb348175bc13b0cf9b8a794784f47bf2064e61815686bc3ceb153157663c9a"} Jan 26 18:52:41 crc kubenswrapper[4737]: I0126 18:52:41.415856 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:52:42 crc kubenswrapper[4737]: E0126 18:52:42.462934 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Jan 26 18:52:42 crc kubenswrapper[4737]: E0126 18:52:42.463575 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcw7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(539f99ad-d4f8-4e02-aca3-f247bc802698): ErrImagePull: rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:52:42 crc kubenswrapper[4737]: E0126 18:52:42.464728 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" Jan 26 18:52:43 crc kubenswrapper[4737]: E0126 18:52:43.123870 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" Jan 26 18:52:44 crc kubenswrapper[4737]: E0126 18:52:44.285651 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 18:52:44 crc kubenswrapper[4737]: E0126 18:52:44.286571 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 
/var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcwlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(5bfe0217-6204-407d-aaeb-94051bb8255b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:44 crc kubenswrapper[4737]: E0126 18:52:44.287694 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.027822 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.028057 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n7ch69h65dhdchcch67bh8dh595h579hf8h674h697hf7h59bh66dh5d4h64h56dhc8h556h5f5h94hfdh66fh58bh589h7chc6h69h5d5hdbh666q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frkbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(2618c486-a631-4a87-ba8f-d5ccad83a208): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.035228 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="2618c486-a631-4a87-ba8f-d5ccad83a208" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.053336 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.053607 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zktn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(49c4dfd6-d334-4e11-8a1d-0dd773f91b1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.055013 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.101411 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.101612 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' 
/tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbj5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFro
m:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(89a3c35d-3e74-49b8-ae17-81808321d00d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.103332 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" Jan 26 18:52:45 crc kubenswrapper[4737]: I0126 18:52:45.140505 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6465a03e-5fc8-4886-b68b-531fe218230f","Type":"ContainerStarted","Data":"f36b97ee38c6982a236ab380d30d7f41bbe64defc6bf7dcf9e4d03f9212e19c0"} Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.141709 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="2618c486-a631-4a87-ba8f-d5ccad83a208" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.141913 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.143903 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" Jan 26 18:52:45 crc kubenswrapper[4737]: E0126 18:52:45.144219 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" Jan 26 18:52:47 crc kubenswrapper[4737]: E0126 18:52:47.200619 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 18:52:47 crc kubenswrapper[4737]: E0126 18:52:47.202138 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czdcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(89018ab2-3fc5-4855-b47e-ac19d8008c8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:47 crc kubenswrapper[4737]: E0126 18:52:47.204046 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="89018ab2-3fc5-4855-b47e-ac19d8008c8e" Jan 26 18:52:48 crc kubenswrapper[4737]: E0126 18:52:48.166695 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="89018ab2-3fc5-4855-b47e-ac19d8008c8e" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.769821 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.770487 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrm67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(ca2ccc7a-b591-4abe-b133-f959b5445611): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.771680 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.788192 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.789140 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldsmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(ca50689d-e7af-4267-9ee0-11d254c08962): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:49 crc kubenswrapper[4737]: E0126 18:52:49.790280 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="ca50689d-e7af-4267-9ee0-11d254c08962" Jan 26 18:52:50 crc kubenswrapper[4737]: I0126 18:52:50.185099 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67df48bc8d-j5g9z" event={"ID":"846a23e4-f5aa-4975-af0a-d02c60aa08fd","Type":"ContainerStarted","Data":"5720b2e282aeb4dc1f10414cf0b57fda29a03acb119f0f67d54d5f01fb1d070e"} Jan 26 18:52:50 crc kubenswrapper[4737]: E0126 18:52:50.186692 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" Jan 26 18:52:50 crc kubenswrapper[4737]: E0126 18:52:50.186709 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="ca50689d-e7af-4267-9ee0-11d254c08962" Jan 26 18:52:50 crc kubenswrapper[4737]: I0126 18:52:50.241850 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67df48bc8d-j5g9z" podStartSLOduration=28.241827112 podStartE2EDuration="28.241827112s" podCreationTimestamp="2026-01-26 18:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
18:52:50.240605664 +0000 UTC m=+1343.548800372" watchObservedRunningTime="2026-01-26 18:52:50.241827112 +0000 UTC m=+1343.550021810" Jan 26 18:52:53 crc kubenswrapper[4737]: I0126 18:52:53.317118 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:53 crc kubenswrapper[4737]: I0126 18:52:53.318211 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:53 crc kubenswrapper[4737]: I0126 18:52:53.321955 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:54 crc kubenswrapper[4737]: I0126 18:52:54.223880 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67df48bc8d-j5g9z" Jan 26 18:52:54 crc kubenswrapper[4737]: I0126 18:52:54.296743 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:52:58 crc kubenswrapper[4737]: I0126 18:52:58.236232 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:52:58 crc kubenswrapper[4737]: E0126 18:52:58.934837 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 18:52:58 crc kubenswrapper[4737]: E0126 18:52:58.935017 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkjmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-bj2wh_openstack(7ac6f0c1-3e0d-4896-a392-913dc6576566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:58 crc kubenswrapper[4737]: E0126 18:52:58.936257 4737 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" podUID="7ac6f0c1-3e0d-4896-a392-913dc6576566" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.285865 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" podUID="7ac6f0c1-3e0d-4896-a392-913dc6576566" Jan 26 18:52:59 crc kubenswrapper[4737]: W0126 18:52:59.530448 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod923f982a_41f5_4c9d_a2dc_50e96e54c283.slice/crio-a17138e922e5e6079b03332c6c13e22ead5971bbde7b11422250a4345c825e1c WatchSource:0}: Error finding container a17138e922e5e6079b03332c6c13e22ead5971bbde7b11422250a4345c825e1c: Status 404 returned error can't find the container with id a17138e922e5e6079b03332c6c13e22ead5971bbde7b11422250a4345c825e1c Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.583568 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.583781 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sxcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-9xtr2_openstack(425352a9-7fbe-4370-be54-cb85d79de0b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.585130 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" podUID="425352a9-7fbe-4370-be54-cb85d79de0b1" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.717835 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.717992 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcdss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-527jf_openstack(bb1f1f93-5c26-47f2-b5f1-42d96632aa89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.719235 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" podUID="bb1f1f93-5c26-47f2-b5f1-42d96632aa89" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.832183 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.832397 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nvnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-2j527_openstack(8b254d0c-eff7-4b4a-8814-a261c66bac0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:52:59 crc kubenswrapper[4737]: E0126 18:52:59.833622 4737 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-2j527" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" Jan 26 18:53:00 crc kubenswrapper[4737]: I0126 18:53:00.276984 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"923f982a-41f5-4c9d-a2dc-50e96e54c283","Type":"ContainerStarted","Data":"a17138e922e5e6079b03332c6c13e22ead5971bbde7b11422250a4345c825e1c"} Jan 26 18:53:00 crc kubenswrapper[4737]: E0126 18:53:00.278939 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-2j527" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" Jan 26 18:53:01 crc kubenswrapper[4737]: E0126 18:53:00.819747 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 18:53:01 crc kubenswrapper[4737]: E0126 18:53:00.820115 4737 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 18:53:01 crc kubenswrapper[4737]: E0126 18:53:00.820254 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9vcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(aba2f81e-11de-4d89-ab90-34ca36d205d6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Jan 26 18:53:01 crc kubenswrapper[4737]: E0126 18:53:00.821646 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.287267 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" event={"ID":"425352a9-7fbe-4370-be54-cb85d79de0b1","Type":"ContainerDied","Data":"e7182ffcabee635b32cc442b3843c5fcea38dc0e40cd3fea5484dfa678c2e316"} Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.287596 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7182ffcabee635b32cc442b3843c5fcea38dc0e40cd3fea5484dfa678c2e316" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.289407 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" event={"ID":"bb1f1f93-5c26-47f2-b5f1-42d96632aa89","Type":"ContainerDied","Data":"9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549"} Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.289444 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9258a42bf5a6354f9bee238c41f0e7d9c99ce607c095b76834f0455a70282549" Jan 26 18:53:01 crc kubenswrapper[4737]: E0126 18:53:01.294765 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.425087 4737 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.457868 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.573387 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config\") pod \"425352a9-7fbe-4370-be54-cb85d79de0b1\" (UID: \"425352a9-7fbe-4370-be54-cb85d79de0b1\") " Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.573452 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcdss\" (UniqueName: \"kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss\") pod \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.573503 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config\") pod \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.573592 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc\") pod \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\" (UID: \"bb1f1f93-5c26-47f2-b5f1-42d96632aa89\") " Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.573633 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxcq\" (UniqueName: \"kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq\") pod \"425352a9-7fbe-4370-be54-cb85d79de0b1\" (UID: 
\"425352a9-7fbe-4370-be54-cb85d79de0b1\") " Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.574199 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config" (OuterVolumeSpecName: "config") pod "bb1f1f93-5c26-47f2-b5f1-42d96632aa89" (UID: "bb1f1f93-5c26-47f2-b5f1-42d96632aa89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.574227 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb1f1f93-5c26-47f2-b5f1-42d96632aa89" (UID: "bb1f1f93-5c26-47f2-b5f1-42d96632aa89"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.574243 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config" (OuterVolumeSpecName: "config") pod "425352a9-7fbe-4370-be54-cb85d79de0b1" (UID: "425352a9-7fbe-4370-be54-cb85d79de0b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.586598 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq" (OuterVolumeSpecName: "kube-api-access-6sxcq") pod "425352a9-7fbe-4370-be54-cb85d79de0b1" (UID: "425352a9-7fbe-4370-be54-cb85d79de0b1"). InnerVolumeSpecName "kube-api-access-6sxcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.587383 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss" (OuterVolumeSpecName: "kube-api-access-qcdss") pod "bb1f1f93-5c26-47f2-b5f1-42d96632aa89" (UID: "bb1f1f93-5c26-47f2-b5f1-42d96632aa89"). InnerVolumeSpecName "kube-api-access-qcdss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.675924 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425352a9-7fbe-4370-be54-cb85d79de0b1-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.676251 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcdss\" (UniqueName: \"kubernetes.io/projected/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-kube-api-access-qcdss\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.676261 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.676269 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb1f1f93-5c26-47f2-b5f1-42d96632aa89-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:01 crc kubenswrapper[4737]: I0126 18:53:01.676280 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxcq\" (UniqueName: \"kubernetes.io/projected/425352a9-7fbe-4370-be54-cb85d79de0b1-kube-api-access-6sxcq\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:02 crc kubenswrapper[4737]: I0126 18:53:02.298750 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"2618c486-a631-4a87-ba8f-d5ccad83a208","Type":"ContainerStarted","Data":"34473b02d7169b31cdaeb63e5c469919ced01f861f8c75e953ab1105e0dd4c59"} Jan 26 18:53:02 crc kubenswrapper[4737]: I0126 18:53:02.298791 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:53:02 crc kubenswrapper[4737]: I0126 18:53:02.298771 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:53:02 crc kubenswrapper[4737]: I0126 18:53:02.299306 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.032230 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.24821838 podStartE2EDuration="47.032207046s" podCreationTimestamp="2026-01-26 18:52:18 +0000 UTC" firstStartedPulling="2026-01-26 18:52:20.452132268 +0000 UTC m=+1313.760326976" lastFinishedPulling="2026-01-26 18:53:01.236120934 +0000 UTC m=+1354.544315642" observedRunningTime="2026-01-26 18:53:02.327662865 +0000 UTC m=+1355.635857583" watchObservedRunningTime="2026-01-26 18:53:05.032207046 +0000 UTC m=+1358.340401764" Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.331771 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb" event={"ID":"11408d0f-4b45-4dab-bc9e-965ac14aed79","Type":"ContainerStarted","Data":"5c680c2efb380665c839f81fdc6d1bdf3122597895939ef529c2a9bfe3df6d39"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.332197 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zrckb" Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.338297 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"923f982a-41f5-4c9d-a2dc-50e96e54c283","Type":"ContainerStarted","Data":"b6064940087b8077148f3cf9f3248b79117392bb88227916e1c9b838dfca1bf5"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.344099 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6465a03e-5fc8-4886-b68b-531fe218230f","Type":"ContainerStarted","Data":"8e50bcde8602d2909287946da8771e5d38b059262c789f401a6cd952a37b80c6"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.346687 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerStarted","Data":"2a45bf488bd58772199e809a22fe3c7f3e42578b271a140966f49ff0c91d3844"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.352732 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zrckb" podStartSLOduration=21.018494083 podStartE2EDuration="41.352700024s" podCreationTimestamp="2026-01-26 18:52:24 +0000 UTC" firstStartedPulling="2026-01-26 18:52:40.880247414 +0000 UTC m=+1334.188442122" lastFinishedPulling="2026-01-26 18:53:01.214453355 +0000 UTC m=+1354.522648063" observedRunningTime="2026-01-26 18:53:05.347734859 +0000 UTC m=+1358.655929577" watchObservedRunningTime="2026-01-26 18:53:05.352700024 +0000 UTC m=+1358.660894732" Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.352786 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tnjz7" event={"ID":"b875fe78-bf29-45f1-a4a5-f3881134a171","Type":"ContainerStarted","Data":"2ae4ab7f7d4f8fced4b7d35bd3af94c8bbfbfcd72ca877e0e7e94e63f37d512c"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.356990 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerStarted","Data":"c04a9af212861452c83b676661f97393cc144f3603cfef17b7005dfd75266a8c"} 
Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.361273 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerStarted","Data":"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.363351 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerStarted","Data":"3014aff826d6940c1d9ef79a0dc47bd5a4dba695d4fb45b94f0378a1b7619f38"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.364329 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"89018ab2-3fc5-4855-b47e-ac19d8008c8e","Type":"ContainerStarted","Data":"a2b0077b19df22c04ef1d0b2ea132488eab17304e48558c4f7a243dd96c79557"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.365838 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" event={"ID":"6b80cd0d-81ac-4f45-a80c-3b1cf442fc44","Type":"ContainerStarted","Data":"b67fabe227d4d1c8817e300100825e25f22a604d7a4dd6d3a083fcaf16f8af16"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.368174 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca50689d-e7af-4267-9ee0-11d254c08962","Type":"ContainerStarted","Data":"afa078d0d5da56af18349c4fc9144857b8a67edc79d4ee028d7d8517ac2c189b"} Jan 26 18:53:05 crc kubenswrapper[4737]: I0126 18:53:05.390919 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-ckxn2" podStartSLOduration=10.403746887 podStartE2EDuration="43.390894954s" podCreationTimestamp="2026-01-26 18:52:22 +0000 UTC" firstStartedPulling="2026-01-26 18:52:28.121216832 +0000 UTC m=+1321.429411540" 
lastFinishedPulling="2026-01-26 18:53:01.108364899 +0000 UTC m=+1354.416559607" observedRunningTime="2026-01-26 18:53:05.383705738 +0000 UTC m=+1358.691900446" watchObservedRunningTime="2026-01-26 18:53:05.390894954 +0000 UTC m=+1358.699089662" Jan 26 18:53:06 crc kubenswrapper[4737]: I0126 18:53:06.380781 4737 generic.go:334] "Generic (PLEG): container finished" podID="b875fe78-bf29-45f1-a4a5-f3881134a171" containerID="2ae4ab7f7d4f8fced4b7d35bd3af94c8bbfbfcd72ca877e0e7e94e63f37d512c" exitCode=0 Jan 26 18:53:06 crc kubenswrapper[4737]: I0126 18:53:06.384704 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tnjz7" event={"ID":"b875fe78-bf29-45f1-a4a5-f3881134a171","Type":"ContainerDied","Data":"2ae4ab7f7d4f8fced4b7d35bd3af94c8bbfbfcd72ca877e0e7e94e63f37d512c"} Jan 26 18:53:07 crc kubenswrapper[4737]: I0126 18:53:07.394795 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tnjz7" event={"ID":"b875fe78-bf29-45f1-a4a5-f3881134a171","Type":"ContainerStarted","Data":"0de87add20ba18742bd4cd57dc9cc02362bba1b78bb86677abe92760f4880b5b"} Jan 26 18:53:07 crc kubenswrapper[4737]: I0126 18:53:07.397049 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerStarted","Data":"06306e7466a0c6f5f61dfb9fca1c925ea9079f79f0d7027946b84c72b13358b0"} Jan 26 18:53:09 crc kubenswrapper[4737]: I0126 18:53:09.260718 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.549818 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.592212 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.594212 4737 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.625373 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.637981 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhwm\" (UniqueName: \"kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.638056 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.638207 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.740139 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.740257 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mdhwm\" (UniqueName: \"kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.740288 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.741394 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.742276 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.762818 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdhwm\" (UniqueName: \"kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm\") pod \"dnsmasq-dns-7cb5889db5-gkplf\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:11 crc kubenswrapper[4737]: I0126 18:53:11.931319 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.741001 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.747438 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.749736 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.749958 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.750082 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hrhv7" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.750396 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.769887 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.863973 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03970489-bf21-4d19-afc2-bf8d39aa683e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.864035 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " 
pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.864123 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-lock\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.864178 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.864260 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-cache\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.864295 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtg7\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-kube-api-access-2xtg7\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966247 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-cache\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966291 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2xtg7\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-kube-api-access-2xtg7\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966397 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03970489-bf21-4d19-afc2-bf8d39aa683e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966431 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966490 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-lock\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966542 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: E0126 18:53:12.966793 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:12 crc kubenswrapper[4737]: E0126 18:53:12.966816 4737 projected.go:194] Error preparing 
data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:53:12 crc kubenswrapper[4737]: E0126 18:53:12.966864 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:13.466846166 +0000 UTC m=+1366.775040874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.966936 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-cache\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.967411 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/03970489-bf21-4d19-afc2-bf8d39aa683e-lock\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.972802 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03970489-bf21-4d19-afc2-bf8d39aa683e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.986442 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.986520 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/249d14dd985a7cc93df0a281318f326b717a1f038acd81649fc42fa8d694273f/globalmount\"" pod="openstack/swift-storage-0" Jan 26 18:53:12 crc kubenswrapper[4737]: I0126 18:53:12.988049 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xtg7\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-kube-api-access-2xtg7\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.086290 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f9d3aff-06c2-426c-b66c-b883c1ff021a\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.258318 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-k25q2"] Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.260812 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.262811 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.262837 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.264922 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.296115 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k25q2"] Jan 26 18:53:13 crc kubenswrapper[4737]: E0126 18:53:13.297108 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-8xk96 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-8xk96 ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-k25q2" podUID="4d53262a-0ae0-4a51-8187-54934a2ad4a2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.320979 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2fbb8"] Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.322529 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.343368 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k25q2"] Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.357413 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2fbb8"] Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388176 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388261 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vnv5\" (UniqueName: \"kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388300 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388340 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " 
pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388444 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388488 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388538 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xk96\" (UniqueName: \"kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388614 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388651 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") 
" pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388725 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388751 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.388806 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.389172 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.389287 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices\") pod \"swift-ring-rebalance-2fbb8\" (UID: 
\"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.449799 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.459885 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491612 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491657 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491685 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491714 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xk96\" (UniqueName: \"kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc 
kubenswrapper[4737]: I0126 18:53:13.491758 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491778 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491810 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: E0126 18:53:13.491815 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:13 crc kubenswrapper[4737]: E0126 18:53:13.491856 4737 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:53:13 crc kubenswrapper[4737]: E0126 18:53:13.491928 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:14.491901931 +0000 UTC m=+1367.800096659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.491826 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492285 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492340 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492398 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492538 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492624 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492666 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492678 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492751 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vnv5\" (UniqueName: \"kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492789 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle\") 
pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.492821 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.493289 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.493359 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.493452 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.495476 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " 
pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.496663 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.497199 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.497453 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.498415 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.499517 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.512513 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xk96\" (UniqueName: \"kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96\") pod \"swift-ring-rebalance-k25q2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.517037 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vnv5\" (UniqueName: \"kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5\") pod \"swift-ring-rebalance-2fbb8\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.593913 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594038 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594323 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594404 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift" (OuterVolumeSpecName: 
"etc-swift") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594468 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594617 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594661 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xk96\" (UniqueName: \"kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.594716 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf\") pod \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\" (UID: \"4d53262a-0ae0-4a51-8187-54934a2ad4a2\") " Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.595236 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). 
InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.595323 4737 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.595337 4737 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d53262a-0ae0-4a51-8187-54934a2ad4a2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.595643 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts" (OuterVolumeSpecName: "scripts") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.600318 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.601159 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.602197 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96" (OuterVolumeSpecName: "kube-api-access-8xk96") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "kube-api-access-8xk96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.602321 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d53262a-0ae0-4a51-8187-54934a2ad4a2" (UID: "4d53262a-0ae0-4a51-8187-54934a2ad4a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.645390 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.697839 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.697881 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d53262a-0ae0-4a51-8187-54934a2ad4a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.697896 4737 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.697909 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xk96\" (UniqueName: \"kubernetes.io/projected/4d53262a-0ae0-4a51-8187-54934a2ad4a2-kube-api-access-8xk96\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:13 crc kubenswrapper[4737]: I0126 18:53:13.697921 4737 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d53262a-0ae0-4a51-8187-54934a2ad4a2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.461131 4737 generic.go:334] "Generic (PLEG): container finished" podID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f" exitCode=0 Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.461213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerDied","Data":"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"} Jan 26 18:53:14 crc 
kubenswrapper[4737]: I0126 18:53:14.463098 4737 generic.go:334] "Generic (PLEG): container finished" podID="89018ab2-3fc5-4855-b47e-ac19d8008c8e" containerID="a2b0077b19df22c04ef1d0b2ea132488eab17304e48558c4f7a243dd96c79557" exitCode=0 Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.463186 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"89018ab2-3fc5-4855-b47e-ac19d8008c8e","Type":"ContainerDied","Data":"a2b0077b19df22c04ef1d0b2ea132488eab17304e48558c4f7a243dd96c79557"} Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.465592 4737 generic.go:334] "Generic (PLEG): container finished" podID="ca50689d-e7af-4267-9ee0-11d254c08962" containerID="afa078d0d5da56af18349c4fc9144857b8a67edc79d4ee028d7d8517ac2c189b" exitCode=0 Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.465719 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-k25q2" Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.465756 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca50689d-e7af-4267-9ee0-11d254c08962","Type":"ContainerDied","Data":"afa078d0d5da56af18349c4fc9144857b8a67edc79d4ee028d7d8517ac2c189b"} Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.516031 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:14 crc kubenswrapper[4737]: E0126 18:53:14.517093 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:14 crc kubenswrapper[4737]: E0126 18:53:14.517121 4737 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 26 18:53:14 crc kubenswrapper[4737]: E0126 18:53:14.517171 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:16.517154035 +0000 UTC m=+1369.825348823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.601603 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k25q2"] Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.612950 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-k25q2"] Jan 26 18:53:14 crc kubenswrapper[4737]: E0126 18:53:14.760923 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d53262a_0ae0_4a51_8187_54934a2ad4a2.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:53:14 crc kubenswrapper[4737]: I0126 18:53:14.997520 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d53262a-0ae0-4a51-8187-54934a2ad4a2" path="/var/lib/kubelet/pods/4d53262a-0ae0-4a51-8187-54934a2ad4a2/volumes" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.340923 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.435652 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config\") pod \"7ac6f0c1-3e0d-4896-a392-913dc6576566\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.435738 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc\") pod \"7ac6f0c1-3e0d-4896-a392-913dc6576566\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.435834 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkjmc\" (UniqueName: \"kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc\") pod \"7ac6f0c1-3e0d-4896-a392-913dc6576566\" (UID: \"7ac6f0c1-3e0d-4896-a392-913dc6576566\") " Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.436205 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config" (OuterVolumeSpecName: "config") pod "7ac6f0c1-3e0d-4896-a392-913dc6576566" (UID: "7ac6f0c1-3e0d-4896-a392-913dc6576566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.436345 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ac6f0c1-3e0d-4896-a392-913dc6576566" (UID: "7ac6f0c1-3e0d-4896-a392-913dc6576566"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.436687 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.436712 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ac6f0c1-3e0d-4896-a392-913dc6576566-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.445975 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc" (OuterVolumeSpecName: "kube-api-access-vkjmc") pod "7ac6f0c1-3e0d-4896-a392-913dc6576566" (UID: "7ac6f0c1-3e0d-4896-a392-913dc6576566"). InnerVolumeSpecName "kube-api-access-vkjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.476053 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" event={"ID":"7ac6f0c1-3e0d-4896-a392-913dc6576566","Type":"ContainerDied","Data":"024ef884157f7d4eaa2dc8dc6e1a05750994da6819c4c152c88a4b02410ae943"} Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.476175 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-bj2wh" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.538880 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkjmc\" (UniqueName: \"kubernetes.io/projected/7ac6f0c1-3e0d-4896-a392-913dc6576566-kube-api-access-vkjmc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.541468 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:53:15 crc kubenswrapper[4737]: I0126 18:53:15.559587 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-bj2wh"] Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.121603 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2fbb8"] Jan 26 18:53:16 crc kubenswrapper[4737]: W0126 18:53:16.138582 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9be0bf2_1b3f_4f77_89ec_b5afa2362e47.slice/crio-c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f WatchSource:0}: Error finding container c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f: Status 404 returned error can't find the container with id c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.356350 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:16 crc kubenswrapper[4737]: W0126 18:53:16.364626 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95e9d7d0_9037_453f_b3ca_c6d563152124.slice/crio-ff9f510bacef681301b12ae7668829fc67e4fc7ec6f644f3990e9ae4aab20354 WatchSource:0}: Error finding container ff9f510bacef681301b12ae7668829fc67e4fc7ec6f644f3990e9ae4aab20354: Status 404 returned error can't 
find the container with id ff9f510bacef681301b12ae7668829fc67e4fc7ec6f644f3990e9ae4aab20354 Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.487349 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"89018ab2-3fc5-4855-b47e-ac19d8008c8e","Type":"ContainerStarted","Data":"7b49547375cc4e6feaca4f44db87793ef0c118f644ef81db338467f8a0ec033e"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.491679 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca50689d-e7af-4267-9ee0-11d254c08962","Type":"ContainerStarted","Data":"7292c086d348b0852b27b24b885df10ecbc36141b3eee66032252e78f7cce5fe"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.493240 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aba2f81e-11de-4d89-ab90-34ca36d205d6","Type":"ContainerStarted","Data":"c04895f54990b88c5519934e14d7ebee5009b74f57e0554fc193a7549f810162"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.493446 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.495760 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6465a03e-5fc8-4886-b68b-531fe218230f","Type":"ContainerStarted","Data":"792a75bc4fff53575de18a0390ec4727e0ca5a686a032cdc38f19ee24224c713"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.497721 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"923f982a-41f5-4c9d-a2dc-50e96e54c283","Type":"ContainerStarted","Data":"55cc1fa470c1e26730bc23946210eeb7f3411b4f32a8ac32c94aede3966569aa"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.503017 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tnjz7" 
event={"ID":"b875fe78-bf29-45f1-a4a5-f3881134a171","Type":"ContainerStarted","Data":"5917009465f63b6f9c2c6c869390fceb10cc5a9d58db76fe8195b72d1e274931"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.503173 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.504737 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2fbb8" event={"ID":"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47","Type":"ContainerStarted","Data":"c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.506619 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" event={"ID":"95e9d7d0-9037-453f-b3ca-c6d563152124","Type":"ContainerStarted","Data":"ff9f510bacef681301b12ae7668829fc67e4fc7ec6f644f3990e9ae4aab20354"} Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.517309 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.64547039 podStartE2EDuration="59.517285972s" podCreationTimestamp="2026-01-26 18:52:17 +0000 UTC" firstStartedPulling="2026-01-26 18:52:20.024793537 +0000 UTC m=+1313.332988245" lastFinishedPulling="2026-01-26 18:53:01.896609119 +0000 UTC m=+1355.204803827" observedRunningTime="2026-01-26 18:53:16.505587036 +0000 UTC m=+1369.813781744" watchObservedRunningTime="2026-01-26 18:53:16.517285972 +0000 UTC m=+1369.825480680" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.525762 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.495415646 podStartE2EDuration="55.525746553s" podCreationTimestamp="2026-01-26 18:52:21 +0000 UTC" firstStartedPulling="2026-01-26 18:52:22.786533336 +0000 UTC m=+1316.094728044" lastFinishedPulling="2026-01-26 
18:53:15.816864243 +0000 UTC m=+1369.125058951" observedRunningTime="2026-01-26 18:53:16.522845634 +0000 UTC m=+1369.831040342" watchObservedRunningTime="2026-01-26 18:53:16.525746553 +0000 UTC m=+1369.833941261" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.548463 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=33.336187325 podStartE2EDuration="49.548444709s" podCreationTimestamp="2026-01-26 18:52:27 +0000 UTC" firstStartedPulling="2026-01-26 18:52:59.53311938 +0000 UTC m=+1352.841314078" lastFinishedPulling="2026-01-26 18:53:15.745376754 +0000 UTC m=+1369.053571462" observedRunningTime="2026-01-26 18:53:16.545222972 +0000 UTC m=+1369.853417720" watchObservedRunningTime="2026-01-26 18:53:16.548444709 +0000 UTC m=+1369.856639417" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.570157 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tnjz7" podStartSLOduration=32.343261895 podStartE2EDuration="52.570130511s" podCreationTimestamp="2026-01-26 18:52:24 +0000 UTC" firstStartedPulling="2026-01-26 18:52:40.890157283 +0000 UTC m=+1334.198351991" lastFinishedPulling="2026-01-26 18:53:01.117025909 +0000 UTC m=+1354.425220607" observedRunningTime="2026-01-26 18:53:16.566850024 +0000 UTC m=+1369.875044742" watchObservedRunningTime="2026-01-26 18:53:16.570130511 +0000 UTC m=+1369.878325239" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.574258 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:16 crc kubenswrapper[4737]: E0126 18:53:16.576477 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:16 
crc kubenswrapper[4737]: E0126 18:53:16.576498 4737 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:53:16 crc kubenswrapper[4737]: E0126 18:53:16.576539 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:20.576521793 +0000 UTC m=+1373.884716501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.623825 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.135450191 podStartE2EDuration="53.62380327s" podCreationTimestamp="2026-01-26 18:52:23 +0000 UTC" firstStartedPulling="2026-01-26 18:52:44.24660957 +0000 UTC m=+1337.554804278" lastFinishedPulling="2026-01-26 18:53:15.734962649 +0000 UTC m=+1369.043157357" observedRunningTime="2026-01-26 18:53:16.584519951 +0000 UTC m=+1369.892714659" watchObservedRunningTime="2026-01-26 18:53:16.62380327 +0000 UTC m=+1369.931997978" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.634308 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371976.220486 podStartE2EDuration="1m0.634290777s" podCreationTimestamp="2026-01-26 18:52:16 +0000 UTC" firstStartedPulling="2026-01-26 18:52:18.362155173 +0000 UTC m=+1311.670349881" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:16.600434277 +0000 UTC m=+1369.908628985" watchObservedRunningTime="2026-01-26 
18:53:16.634290777 +0000 UTC m=+1369.942485485" Jan 26 18:53:16 crc kubenswrapper[4737]: I0126 18:53:16.999768 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac6f0c1-3e0d-4896-a392-913dc6576566" path="/var/lib/kubelet/pods/7ac6f0c1-3e0d-4896-a392-913dc6576566/volumes" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.019863 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.087538 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.185462 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.228388 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.522566 4737 generic.go:334] "Generic (PLEG): container finished" podID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerID="f887fe89f9bb30426e22f98a86715c12fe4a19f6ea1d18ba83b2445cebda64c9" exitCode=0 Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.522674 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2j527" event={"ID":"8b254d0c-eff7-4b4a-8814-a261c66bac0b","Type":"ContainerDied","Data":"f887fe89f9bb30426e22f98a86715c12fe4a19f6ea1d18ba83b2445cebda64c9"} Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.525349 4737 generic.go:334] "Generic (PLEG): container finished" podID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerID="f1737c793dc7a18acb0ae0f2dc34be4a4ba6660fd48d3ae77286876e6af77432" exitCode=0 Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.525393 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" 
event={"ID":"95e9d7d0-9037-453f-b3ca-c6d563152124","Type":"ContainerDied","Data":"f1737c793dc7a18acb0ae0f2dc34be4a4ba6660fd48d3ae77286876e6af77432"} Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.525501 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.526111 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.526255 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.607437 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.608456 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.625428 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.626046 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.867792 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.890617 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.892913 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.902404 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 18:53:17 crc kubenswrapper[4737]: I0126 18:53:17.906634 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.010003 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-96rrx"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.011772 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.016005 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.022414 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.022485 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.022534 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc\") pod 
\"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.022730 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2kl\" (UniqueName: \"kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.043917 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-96rrx"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.113001 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.115352 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.118677 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zzkgf" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.118958 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.119236 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.119397 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.125822 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.125886 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovs-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126061 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126134 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126157 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovn-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126207 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: 
\"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126297 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-combined-ca-bundle\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126370 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2kl\" (UniqueName: \"kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126531 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk46p\" (UniqueName: \"kubernetes.io/projected/6bdafee1-1c61-4cbe-b052-c5948c27152d-kube-api-access-kk46p\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.126583 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bdafee1-1c61-4cbe-b052-c5948c27152d-config\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.128497 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb\") pod 
\"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.130279 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.132326 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.135461 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.153998 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.165998 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2kl\" (UniqueName: \"kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl\") pod \"dnsmasq-dns-74f6f696b9-kjzn5\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.202227 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.204273 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.208332 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233206 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233269 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233299 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovs-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233379 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-config\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233442 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovn-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233470 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-combined-ca-bundle\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233491 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233521 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233551 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-scripts\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233634 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk46p\" (UniqueName: \"kubernetes.io/projected/6bdafee1-1c61-4cbe-b052-c5948c27152d-kube-api-access-kk46p\") pod 
\"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233681 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bdafee1-1c61-4cbe-b052-c5948c27152d-config\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233713 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.233808 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdcz8\" (UniqueName: \"kubernetes.io/projected/19bc14ba-dd2b-4cb9-969d-e44339856cf0-kube-api-access-cdcz8\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.238787 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovs-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.245117 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-combined-ca-bundle\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " 
pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.251750 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bdafee1-1c61-4cbe-b052-c5948c27152d-config\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.251849 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6bdafee1-1c61-4cbe-b052-c5948c27152d-ovn-rundir\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.252851 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.270766 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bdafee1-1c61-4cbe-b052-c5948c27152d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.282243 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk46p\" (UniqueName: \"kubernetes.io/projected/6bdafee1-1c61-4cbe-b052-c5948c27152d-kube-api-access-kk46p\") pod \"ovn-controller-metrics-96rrx\" (UID: \"6bdafee1-1c61-4cbe-b052-c5948c27152d\") " pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.318128 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350767 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350827 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmbk\" (UniqueName: \"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350853 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350902 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350968 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdcz8\" (UniqueName: \"kubernetes.io/projected/19bc14ba-dd2b-4cb9-969d-e44339856cf0-kube-api-access-cdcz8\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.350997 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351097 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-config\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351150 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351193 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351223 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351246 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.351271 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-scripts\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.356763 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.357525 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-config\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.357864 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.358617 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19bc14ba-dd2b-4cb9-969d-e44339856cf0-scripts\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.359807 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.363929 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-96rrx" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.367303 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bc14ba-dd2b-4cb9-969d-e44339856cf0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.404637 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdcz8\" (UniqueName: \"kubernetes.io/projected/19bc14ba-dd2b-4cb9-969d-e44339856cf0-kube-api-access-cdcz8\") pod \"ovn-northd-0\" (UID: \"19bc14ba-dd2b-4cb9-969d-e44339856cf0\") " pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.452627 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.452926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmbk\" (UniqueName: \"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 
18:53:18.452973 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.453142 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.453170 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.456419 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.458537 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.459183 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.465545 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.473064 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.484530 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmbk\" (UniqueName: \"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk\") pod \"dnsmasq-dns-698758b865-7xhdj\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.541662 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2j527" event={"ID":"8b254d0c-eff7-4b4a-8814-a261c66bac0b","Type":"ContainerStarted","Data":"52a8f589668ac114a7831c4fc367289d06a0c9c9f68f117bc94d724605041bac"} Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.541867 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2j527" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="dnsmasq-dns" containerID="cri-o://52a8f589668ac114a7831c4fc367289d06a0c9c9f68f117bc94d724605041bac" gracePeriod=10 Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.542200 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.550299 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" event={"ID":"95e9d7d0-9037-453f-b3ca-c6d563152124","Type":"ContainerStarted","Data":"ae0dc74e933ad9f5d755737cfb28a8ed96921023ab493b1af8b2aabda3518486"} Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.551518 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="dnsmasq-dns" containerID="cri-o://ae0dc74e933ad9f5d755737cfb28a8ed96921023ab493b1af8b2aabda3518486" gracePeriod=10 Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.578447 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2j527" podStartSLOduration=3.671055881 podStartE2EDuration="1m4.578421932s" podCreationTimestamp="2026-01-26 18:52:14 +0000 UTC" firstStartedPulling="2026-01-26 18:52:15.61068005 +0000 UTC m=+1308.918874758" lastFinishedPulling="2026-01-26 18:53:16.518046101 +0000 UTC m=+1369.826240809" observedRunningTime="2026-01-26 18:53:18.567886073 +0000 UTC m=+1371.876080781" watchObservedRunningTime="2026-01-26 18:53:18.578421932 +0000 UTC m=+1371.886616640" Jan 26 18:53:18 crc kubenswrapper[4737]: I0126 18:53:18.600425 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" podStartSLOduration=6.917774602 podStartE2EDuration="7.600401821s" podCreationTimestamp="2026-01-26 18:53:11 +0000 UTC" firstStartedPulling="2026-01-26 18:53:16.367389091 +0000 UTC m=+1369.675583799" lastFinishedPulling="2026-01-26 18:53:17.0500163 +0000 UTC m=+1370.358211018" observedRunningTime="2026-01-26 18:53:18.592920925 +0000 UTC m=+1371.901115633" watchObservedRunningTime="2026-01-26 18:53:18.600401821 +0000 UTC m=+1371.908596529" Jan 26 18:53:18 crc 
kubenswrapper[4737]: I0126 18:53:18.760982 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.148895 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.148946 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.386737 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-645c6f4f57-glmhb" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerName="console" containerID="cri-o://e77c00fcc8e5981a3b4bc1de3b40217df4672344e02bd378be1b988d919d7c17" gracePeriod=15 Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.567990 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-645c6f4f57-glmhb_d5a33684-359a-48ee-b9de-3f09cd04bc51/console/0.log" Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.568055 4737 generic.go:334] "Generic (PLEG): container finished" podID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerID="e77c00fcc8e5981a3b4bc1de3b40217df4672344e02bd378be1b988d919d7c17" exitCode=2 Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.568149 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-645c6f4f57-glmhb" event={"ID":"d5a33684-359a-48ee-b9de-3f09cd04bc51","Type":"ContainerDied","Data":"e77c00fcc8e5981a3b4bc1de3b40217df4672344e02bd378be1b988d919d7c17"} Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.572529 4737 generic.go:334] "Generic (PLEG): container finished" podID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerID="52a8f589668ac114a7831c4fc367289d06a0c9c9f68f117bc94d724605041bac" exitCode=0 Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.572665 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2j527" event={"ID":"8b254d0c-eff7-4b4a-8814-a261c66bac0b","Type":"ContainerDied","Data":"52a8f589668ac114a7831c4fc367289d06a0c9c9f68f117bc94d724605041bac"} Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.580375 4737 generic.go:334] "Generic (PLEG): container finished" podID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerID="ae0dc74e933ad9f5d755737cfb28a8ed96921023ab493b1af8b2aabda3518486" exitCode=0 Jan 26 18:53:19 crc kubenswrapper[4737]: I0126 18:53:19.583141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" event={"ID":"95e9d7d0-9037-453f-b3ca-c6d563152124","Type":"ContainerDied","Data":"ae0dc74e933ad9f5d755737cfb28a8ed96921023ab493b1af8b2aabda3518486"} Jan 26 18:53:20 crc kubenswrapper[4737]: I0126 18:53:20.628690 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:20 crc kubenswrapper[4737]: E0126 18:53:20.628947 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:20 crc kubenswrapper[4737]: E0126 18:53:20.629115 4737 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:53:20 crc kubenswrapper[4737]: E0126 18:53:20.629189 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:28.629166466 +0000 UTC m=+1381.937361184 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:21 crc kubenswrapper[4737]: I0126 18:53:21.700432 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 18:53:21 crc kubenswrapper[4737]: I0126 18:53:21.932713 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.470849 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.477601 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.569573 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc\") pod \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.569645 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config\") pod \"95e9d7d0-9037-453f-b3ca-c6d563152124\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.569807 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc\") pod \"95e9d7d0-9037-453f-b3ca-c6d563152124\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " 
Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.569896 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvnb\" (UniqueName: \"kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb\") pod \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.570232 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config\") pod \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\" (UID: \"8b254d0c-eff7-4b4a-8814-a261c66bac0b\") " Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.570390 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdhwm\" (UniqueName: \"kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm\") pod \"95e9d7d0-9037-453f-b3ca-c6d563152124\" (UID: \"95e9d7d0-9037-453f-b3ca-c6d563152124\") " Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.576284 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm" (OuterVolumeSpecName: "kube-api-access-mdhwm") pod "95e9d7d0-9037-453f-b3ca-c6d563152124" (UID: "95e9d7d0-9037-453f-b3ca-c6d563152124"). InnerVolumeSpecName "kube-api-access-mdhwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.582430 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb" (OuterVolumeSpecName: "kube-api-access-2nvnb") pod "8b254d0c-eff7-4b4a-8814-a261c66bac0b" (UID: "8b254d0c-eff7-4b4a-8814-a261c66bac0b"). InnerVolumeSpecName "kube-api-access-2nvnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.622804 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config" (OuterVolumeSpecName: "config") pod "8b254d0c-eff7-4b4a-8814-a261c66bac0b" (UID: "8b254d0c-eff7-4b4a-8814-a261c66bac0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.634986 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95e9d7d0-9037-453f-b3ca-c6d563152124" (UID: "95e9d7d0-9037-453f-b3ca-c6d563152124"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.642105 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config" (OuterVolumeSpecName: "config") pod "95e9d7d0-9037-453f-b3ca-c6d563152124" (UID: "95e9d7d0-9037-453f-b3ca-c6d563152124"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.642399 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2j527" event={"ID":"8b254d0c-eff7-4b4a-8814-a261c66bac0b","Type":"ContainerDied","Data":"9e291dc82814af364677fa831ddfd9a2d7145db1694d81d807fd640b69196dcc"} Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.642483 4737 scope.go:117] "RemoveContainer" containerID="52a8f589668ac114a7831c4fc367289d06a0c9c9f68f117bc94d724605041bac" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.643418 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2j527" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.653334 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" event={"ID":"95e9d7d0-9037-453f-b3ca-c6d563152124","Type":"ContainerDied","Data":"ff9f510bacef681301b12ae7668829fc67e4fc7ec6f644f3990e9ae4aab20354"} Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.653385 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gkplf" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.674563 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.674610 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e9d7d0-9037-453f-b3ca-c6d563152124-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.674620 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvnb\" (UniqueName: \"kubernetes.io/projected/8b254d0c-eff7-4b4a-8814-a261c66bac0b-kube-api-access-2nvnb\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.674630 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.674649 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdhwm\" (UniqueName: \"kubernetes.io/projected/95e9d7d0-9037-453f-b3ca-c6d563152124-kube-api-access-mdhwm\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.676466 4737 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b254d0c-eff7-4b4a-8814-a261c66bac0b" (UID: "8b254d0c-eff7-4b4a-8814-a261c66bac0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.713304 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.723711 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gkplf"] Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.777206 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b254d0c-eff7-4b4a-8814-a261c66bac0b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.998057 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" path="/var/lib/kubelet/pods/95e9d7d0-9037-453f-b3ca-c6d563152124/volumes" Jan 26 18:53:22 crc kubenswrapper[4737]: I0126 18:53:22.998883 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:53:23 crc kubenswrapper[4737]: I0126 18:53:23.002436 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2j527"] Jan 26 18:53:23 crc kubenswrapper[4737]: I0126 18:53:23.021939 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 18:53:23 crc kubenswrapper[4737]: I0126 18:53:23.110518 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 18:53:24 crc kubenswrapper[4737]: I0126 18:53:24.993468 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" path="/var/lib/kubelet/pods/8b254d0c-eff7-4b4a-8814-a261c66bac0b/volumes" Jan 26 18:53:25 crc kubenswrapper[4737]: I0126 18:53:25.495431 4737 patch_prober.go:28] interesting pod/console-645c6f4f57-glmhb container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.91:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 18:53:25 crc kubenswrapper[4737]: I0126 18:53:25.496148 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-645c6f4f57-glmhb" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerName="console" probeResult="failure" output="Get \"https://10.217.0.91:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.130200 4737 scope.go:117] "RemoveContainer" containerID="f887fe89f9bb30426e22f98a86715c12fe4a19f6ea1d18ba83b2445cebda64c9" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.234473 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-645c6f4f57-glmhb_d5a33684-359a-48ee-b9de-3f09cd04bc51/console/0.log" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.234860 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.255736 4737 scope.go:117] "RemoveContainer" containerID="ae0dc74e933ad9f5d755737cfb28a8ed96921023ab493b1af8b2aabda3518486" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.311930 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2wspq"] Jan 26 18:53:26 crc kubenswrapper[4737]: E0126 18:53:26.312809 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerName="console" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.312831 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerName="console" Jan 26 18:53:26 crc kubenswrapper[4737]: E0126 18:53:26.312861 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.312868 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: E0126 18:53:26.312887 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="init" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.312895 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="init" Jan 26 18:53:26 crc kubenswrapper[4737]: E0126 18:53:26.312916 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="init" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.312924 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="init" Jan 26 18:53:26 crc kubenswrapper[4737]: E0126 
18:53:26.312938 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.312945 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.313150 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e9d7d0-9037-453f-b3ca-c6d563152124" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.313167 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" containerName="console" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.313175 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b254d0c-eff7-4b4a-8814-a261c66bac0b" containerName="dnsmasq-dns" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.314371 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.317183 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.328169 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2wspq"] Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.361646 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.361802 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.361830 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.361976 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.362013 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7mdk6\" (UniqueName: \"kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.362196 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.362227 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert\") pod \"d5a33684-359a-48ee-b9de-3f09cd04bc51\" (UID: \"d5a33684-359a-48ee-b9de-3f09cd04bc51\") " Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.362486 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.362730 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbjrt\" (UniqueName: \"kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.366467 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.366531 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.366601 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config" (OuterVolumeSpecName: "console-config") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.368023 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca" (OuterVolumeSpecName: "service-ca") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.374670 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.374850 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6" (OuterVolumeSpecName: "kube-api-access-7mdk6") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "kube-api-access-7mdk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.378363 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d5a33684-359a-48ee-b9de-3f09cd04bc51" (UID: "d5a33684-359a-48ee-b9de-3f09cd04bc51"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.431791 4737 scope.go:117] "RemoveContainer" containerID="f1737c793dc7a18acb0ae0f2dc34be4a4ba6660fd48d3ae77286876e6af77432" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467786 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbjrt\" (UniqueName: \"kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467849 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " 
pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467959 4737 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467970 4737 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467983 4737 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467991 4737 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.467999 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mdk6\" (UniqueName: \"kubernetes.io/projected/d5a33684-359a-48ee-b9de-3f09cd04bc51-kube-api-access-7mdk6\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.468009 4737 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.468017 4737 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a33684-359a-48ee-b9de-3f09cd04bc51-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:26 crc 
kubenswrapper[4737]: I0126 18:53:26.468760 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.487899 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbjrt\" (UniqueName: \"kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt\") pod \"root-account-create-update-2wspq\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.656834 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.694980 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2fbb8" event={"ID":"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47","Type":"ContainerStarted","Data":"92342e732a0b918a1eaac74018c29ac11c769fbea4c5a6e7349f67293c72f3fb"} Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.704569 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerStarted","Data":"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"} Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.706824 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-645c6f4f57-glmhb_d5a33684-359a-48ee-b9de-3f09cd04bc51/console/0.log" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.706887 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-645c6f4f57-glmhb" 
event={"ID":"d5a33684-359a-48ee-b9de-3f09cd04bc51","Type":"ContainerDied","Data":"36bf75cf95b4e46776a21c9c00482713b649e90bbed767f8c6a88367e9225b89"} Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.706923 4737 scope.go:117] "RemoveContainer" containerID="e77c00fcc8e5981a3b4bc1de3b40217df4672344e02bd378be1b988d919d7c17" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.707031 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-645c6f4f57-glmhb" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.722704 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2fbb8" podStartSLOduration=3.618639327 podStartE2EDuration="13.722676539s" podCreationTimestamp="2026-01-26 18:53:13 +0000 UTC" firstStartedPulling="2026-01-26 18:53:16.151806187 +0000 UTC m=+1369.460000895" lastFinishedPulling="2026-01-26 18:53:26.255843399 +0000 UTC m=+1379.564038107" observedRunningTime="2026-01-26 18:53:26.715928969 +0000 UTC m=+1380.024123677" watchObservedRunningTime="2026-01-26 18:53:26.722676539 +0000 UTC m=+1380.030871267" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.751831 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 18:53:26 crc kubenswrapper[4737]: W0126 18:53:26.758399 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4943ea2e_2d2e_4024_97f5_b7a2b288e3b2.slice/crio-9da804a911f9cfc50e69b67ac067af3d65ee5e741c5cbcd32cb0d1bccfa39bd9 WatchSource:0}: Error finding container 9da804a911f9cfc50e69b67ac067af3d65ee5e741c5cbcd32cb0d1bccfa39bd9: Status 404 returned error can't find the container with id 9da804a911f9cfc50e69b67ac067af3d65ee5e741c5cbcd32cb0d1bccfa39bd9 Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.772020 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] 
Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.774362 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.785115 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.785834 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:26 crc kubenswrapper[4737]: I0126 18:53:26.799641 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-645c6f4f57-glmhb"] Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.087212 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a33684-359a-48ee-b9de-3f09cd04bc51" path="/var/lib/kubelet/pods/d5a33684-359a-48ee-b9de-3f09cd04bc51/volumes" Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.088587 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-96rrx"] Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.255083 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.502889 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2wspq"] Jan 26 18:53:27 crc kubenswrapper[4737]: W0126 18:53:27.677276 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1d8684_f062_414f_a991_fe492f651e21.slice/crio-c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec WatchSource:0}: Error finding container c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec: Status 404 returned error can't find the container with id c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec Jan 26 
18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.682609 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.723001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2wspq" event={"ID":"3b1d8684-f062-414f-a991-fe492f651e21","Type":"ContainerStarted","Data":"c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec"} Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.728414 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-96rrx" event={"ID":"6bdafee1-1c61-4cbe-b052-c5948c27152d","Type":"ContainerStarted","Data":"f29d65cee0924e2c615b20421bc2182811a0f783e8afe150dcdbe3fa126a4a18"} Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.731602 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" event={"ID":"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb","Type":"ContainerStarted","Data":"67a4228c8f047840cf4cd363df218607266d982917d94ef5fb6f850921fa791d"} Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.732623 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"19bc14ba-dd2b-4cb9-969d-e44339856cf0","Type":"ContainerStarted","Data":"952bda44e122a9ba72cd84cddcade1b708ef50c5d36aeeec532a6cd18133563d"} Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.734784 4737 generic.go:334] "Generic (PLEG): container finished" podID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerID="63068da46dd032ec15d7b3b5928e294fd23fa62a1a292859de77e65f8bb1b7ef" exitCode=0 Jan 26 18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.736587 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7xhdj" event={"ID":"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2","Type":"ContainerDied","Data":"63068da46dd032ec15d7b3b5928e294fd23fa62a1a292859de77e65f8bb1b7ef"} Jan 26 
18:53:27 crc kubenswrapper[4737]: I0126 18:53:27.736622 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7xhdj" event={"ID":"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2","Type":"ContainerStarted","Data":"9da804a911f9cfc50e69b67ac067af3d65ee5e741c5cbcd32cb0d1bccfa39bd9"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.680505 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0" Jan 26 18:53:28 crc kubenswrapper[4737]: E0126 18:53:28.680750 4737 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 18:53:28 crc kubenswrapper[4737]: E0126 18:53:28.681226 4737 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 18:53:28 crc kubenswrapper[4737]: E0126 18:53:28.681304 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift podName:03970489-bf21-4d19-afc2-bf8d39aa683e nodeName:}" failed. No retries permitted until 2026-01-26 18:53:44.681275375 +0000 UTC m=+1397.989470083 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift") pod "swift-storage-0" (UID: "03970489-bf21-4d19-afc2-bf8d39aa683e") : configmap "swift-ring-files" not found Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.734765 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ntqg8"] Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.739898 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.751813 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ntqg8"] Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.766713 4737 generic.go:334] "Generic (PLEG): container finished" podID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerID="b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887" exitCode=0 Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.768182 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" event={"ID":"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb","Type":"ContainerDied","Data":"b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.778486 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"19bc14ba-dd2b-4cb9-969d-e44339856cf0","Type":"ContainerStarted","Data":"dfc1b28fa11de3fa31ff92ed0f299e3987e37fcd8318990b77eb46c5f492ca5b"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.782681 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.783099 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzc6\" (UniqueName: \"kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.791853 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7xhdj" event={"ID":"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2","Type":"ContainerStarted","Data":"2c9c0d4d99b533cca672fb2062e4a3ece43523c094dbf869dabc5092baf30fd6"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.803675 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.806558 4737 generic.go:334] "Generic (PLEG): container finished" podID="3b1d8684-f062-414f-a991-fe492f651e21" containerID="8c379f6429cef2f3fe40f14884abebbff80588aafc47fcf061ea9ab1f406e9aa" exitCode=0 Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.806655 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2wspq" event={"ID":"3b1d8684-f062-414f-a991-fe492f651e21","Type":"ContainerDied","Data":"8c379f6429cef2f3fe40f14884abebbff80588aafc47fcf061ea9ab1f406e9aa"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.813464 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-96rrx" event={"ID":"6bdafee1-1c61-4cbe-b052-c5948c27152d","Type":"ContainerStarted","Data":"af83455f6c8723b9efb917cab2e68eef5b40bb073ebda24d0730002c9f768c7b"} Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.873139 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a4fd-account-create-update-jq2tl"] Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.874751 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.880459 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.889239 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzc6\" (UniqueName: \"kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.889519 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtsf\" (UniqueName: \"kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.889591 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.889732 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.894819 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.904246 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a4fd-account-create-update-jq2tl"] Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.910772 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7xhdj" podStartSLOduration=10.910750418 podStartE2EDuration="10.910750418s" podCreationTimestamp="2026-01-26 18:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:28.842221068 +0000 UTC m=+1382.150415806" watchObservedRunningTime="2026-01-26 18:53:28.910750418 +0000 UTC m=+1382.218945126" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.916511 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzc6\" (UniqueName: \"kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6\") pod \"keystone-db-create-ntqg8\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:28 crc kubenswrapper[4737]: I0126 18:53:28.925417 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-96rrx" podStartSLOduration=11.925399484 podStartE2EDuration="11.925399484s" podCreationTimestamp="2026-01-26 18:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:28.879818897 +0000 UTC m=+1382.188013605" watchObservedRunningTime="2026-01-26 18:53:28.925399484 +0000 UTC 
m=+1382.233594182" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.101343 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtsf\" (UniqueName: \"kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.101440 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.102501 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.112362 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.174151 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xx2nh"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.177284 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.191088 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtsf\" (UniqueName: \"kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf\") pod \"keystone-a4fd-account-create-update-jq2tl\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.195737 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xx2nh"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.222931 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lxbz\" (UniqueName: \"kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.223018 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.278047 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.304779 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-887b-account-create-update-zvz84"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.306450 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.309179 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.313325 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-887b-account-create-update-zvz84"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.327112 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lxbz\" (UniqueName: \"kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.327203 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.328250 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.356233 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lxbz\" (UniqueName: \"kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz\") pod \"placement-db-create-xx2nh\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc 
kubenswrapper[4737]: I0126 18:53:29.429057 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjb26\" (UniqueName: \"kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.429457 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.483775 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pb2pj"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.485834 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.501871 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pb2pj"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.525575 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.531189 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjb26\" (UniqueName: \"kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.531434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.534523 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.575656 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjb26\" (UniqueName: \"kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26\") pod \"placement-887b-account-create-update-zvz84\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.621644 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f9d4-account-create-update-bf25x"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.623581 4737 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.630346 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.635104 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.635383 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4v9\" (UniqueName: \"kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.647732 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f9d4-account-create-update-bf25x"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.736921 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.737114 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txrc\" (UniqueName: \"kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: 
\"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.737267 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.737308 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4v9\" (UniqueName: \"kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.740913 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.756180 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4v9\" (UniqueName: \"kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9\") pod \"glance-db-create-pb2pj\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.836878 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" event={"ID":"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb","Type":"ContainerStarted","Data":"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf"} Jan 
26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.837198 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.838133 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.839198 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txrc\" (UniqueName: \"kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.840627 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.841777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.842049 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ntqg8"] Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.843144 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"19bc14ba-dd2b-4cb9-969d-e44339856cf0","Type":"ContainerStarted","Data":"b5694799b72953942437daf8ecacbdc52a1d3c40626c30993a1ac9a1df167650"} Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.843265 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.856532 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.856792 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" podStartSLOduration=12.856733839 podStartE2EDuration="12.856733839s" podCreationTimestamp="2026-01-26 18:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:29.855574921 +0000 UTC m=+1383.163769629" watchObservedRunningTime="2026-01-26 18:53:29.856733839 +0000 UTC m=+1383.164928547" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.874719 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txrc\" (UniqueName: \"kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc\") pod \"glance-f9d4-account-create-update-bf25x\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:29 crc kubenswrapper[4737]: I0126 18:53:29.889736 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=10.193687665 podStartE2EDuration="11.889712678s" podCreationTimestamp="2026-01-26 18:53:18 +0000 UTC" firstStartedPulling="2026-01-26 18:53:26.76758533 +0000 UTC m=+1380.075780038" lastFinishedPulling="2026-01-26 18:53:28.463610343 +0000 UTC m=+1381.771805051" observedRunningTime="2026-01-26 18:53:29.884109835 +0000 UTC 
m=+1383.192304543" watchObservedRunningTime="2026-01-26 18:53:29.889712678 +0000 UTC m=+1383.197907386" Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.020567 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.132369 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a4fd-account-create-update-jq2tl"] Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.146165 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xx2nh"] Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.655250 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f9d4-account-create-update-bf25x"] Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.666694 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pb2pj"] Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.679079 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-887b-account-create-update-zvz84"] Jan 26 18:53:30 crc kubenswrapper[4737]: W0126 18:53:30.819536 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c00937_8b37_4a41_8403_c69b2e307675.slice/crio-f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6 WatchSource:0}: Error finding container f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6: Status 404 returned error can't find the container with id f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6 Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.857297 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9d4-account-create-update-bf25x" 
event={"ID":"5e51d252-fc4e-4694-87e5-dade4de60ec5","Type":"ContainerStarted","Data":"7a2e817047b054dea7ab88e612db0191ec52e27750f1b1776587ef5e3416eda3"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.858952 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-887b-account-create-update-zvz84" event={"ID":"e57f4e7a-0e31-4911-9f19-a43e3d91e721","Type":"ContainerStarted","Data":"cb19d7543861ea57f1c447d098756f09f11f2488c92ea7e02aaf92896036540d"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.872594 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2wspq" event={"ID":"3b1d8684-f062-414f-a991-fe492f651e21","Type":"ContainerDied","Data":"c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.873131 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c763df7a33fa76744a9624212c79862874fffc51f6a4ec3a4478b8b004a529ec" Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.873903 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pb2pj" event={"ID":"c9c00937-8b37-4a41-8403-c69b2e307675","Type":"ContainerStarted","Data":"f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.878487 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerStarted","Data":"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.880519 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ntqg8" event={"ID":"e633aa68-a0b7-4ee4-bf00-7d46105654e2","Type":"ContainerStarted","Data":"aaa2655e3e7923485bf313b5caf3bc38bb74acc5668adcae7cb4444b8317df4a"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 
18:53:30.881457 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xx2nh" event={"ID":"b117bcd7-b58c-4af6-9bd6-ce70ec70f601","Type":"ContainerStarted","Data":"1b4594a9be0940b7993e800f93bbe366dc501fa4e9730a024775a29377d9f2ba"} Jan 26 18:53:30 crc kubenswrapper[4737]: I0126 18:53:30.882463 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a4fd-account-create-update-jq2tl" event={"ID":"a9404174-9225-41ad-9db6-d523f17739d0","Type":"ContainerStarted","Data":"a781f243c477db96239118a50dbe3146cd7700683a01c3f177065045d4d08612"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.070319 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.087920 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts\") pod \"3b1d8684-f062-414f-a991-fe492f651e21\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.088849 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbjrt\" (UniqueName: \"kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt\") pod \"3b1d8684-f062-414f-a991-fe492f651e21\" (UID: \"3b1d8684-f062-414f-a991-fe492f651e21\") " Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.089837 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b1d8684-f062-414f-a991-fe492f651e21" (UID: "3b1d8684-f062-414f-a991-fe492f651e21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.090264 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b1d8684-f062-414f-a991-fe492f651e21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.095658 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt" (OuterVolumeSpecName: "kube-api-access-vbjrt") pod "3b1d8684-f062-414f-a991-fe492f651e21" (UID: "3b1d8684-f062-414f-a991-fe492f651e21"). InnerVolumeSpecName "kube-api-access-vbjrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.193192 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbjrt\" (UniqueName: \"kubernetes.io/projected/3b1d8684-f062-414f-a991-fe492f651e21-kube-api-access-vbjrt\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.620725 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nppft"] Jan 26 18:53:31 crc kubenswrapper[4737]: E0126 18:53:31.621379 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1d8684-f062-414f-a991-fe492f651e21" containerName="mariadb-account-create-update" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.621446 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1d8684-f062-414f-a991-fe492f651e21" containerName="mariadb-account-create-update" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.622169 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1d8684-f062-414f-a991-fe492f651e21" containerName="mariadb-account-create-update" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.622971 4737 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.638327 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nppft"] Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.703505 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.703686 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxfw\" (UniqueName: \"kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.806357 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.806414 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxfw\" (UniqueName: \"kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " 
pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.807896 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.838386 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxfw\" (UniqueName: \"kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw\") pod \"mysqld-exporter-openstack-db-create-nppft\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.846259 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-bc81-account-create-update-fjw9f"] Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.847928 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.850382 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.871155 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bc81-account-create-update-fjw9f"] Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.899751 4737 generic.go:334] "Generic (PLEG): container finished" podID="e633aa68-a0b7-4ee4-bf00-7d46105654e2" containerID="19fb879be2b1f5e4c909d3ae1f53209501da34f7194b3ac04975a500296a26f0" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.900162 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ntqg8" event={"ID":"e633aa68-a0b7-4ee4-bf00-7d46105654e2","Type":"ContainerDied","Data":"19fb879be2b1f5e4c909d3ae1f53209501da34f7194b3ac04975a500296a26f0"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.901940 4737 generic.go:334] "Generic (PLEG): container finished" podID="b117bcd7-b58c-4af6-9bd6-ce70ec70f601" containerID="e42da6a8906b8f31ce69bc4df544287660d854164363428666a1ec3065ce5f7e" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.902015 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xx2nh" event={"ID":"b117bcd7-b58c-4af6-9bd6-ce70ec70f601","Type":"ContainerDied","Data":"e42da6a8906b8f31ce69bc4df544287660d854164363428666a1ec3065ce5f7e"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.905441 4737 generic.go:334] "Generic (PLEG): container finished" podID="a9404174-9225-41ad-9db6-d523f17739d0" containerID="4cd50035a3e48d31c3cd0ab6dcc00f71ad619f8897624e53fd8f7336e60488b6" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.905512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-a4fd-account-create-update-jq2tl" event={"ID":"a9404174-9225-41ad-9db6-d523f17739d0","Type":"ContainerDied","Data":"4cd50035a3e48d31c3cd0ab6dcc00f71ad619f8897624e53fd8f7336e60488b6"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.907479 4737 generic.go:334] "Generic (PLEG): container finished" podID="5e51d252-fc4e-4694-87e5-dade4de60ec5" containerID="95edc5c0585e3b1e7fa8f478d2913c1d1b1bb8aa7d88d2db5c8f3c342eae47be" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.907546 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9d4-account-create-update-bf25x" event={"ID":"5e51d252-fc4e-4694-87e5-dade4de60ec5","Type":"ContainerDied","Data":"95edc5c0585e3b1e7fa8f478d2913c1d1b1bb8aa7d88d2db5c8f3c342eae47be"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.909455 4737 generic.go:334] "Generic (PLEG): container finished" podID="e57f4e7a-0e31-4911-9f19-a43e3d91e721" containerID="354db663809afa6815ca0106d2eb43df1c1cfc00166bdc2dd4bfad1209d5940c" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.909512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-887b-account-create-update-zvz84" event={"ID":"e57f4e7a-0e31-4911-9f19-a43e3d91e721","Type":"ContainerDied","Data":"354db663809afa6815ca0106d2eb43df1c1cfc00166bdc2dd4bfad1209d5940c"} Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.909913 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.909975 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nk86r\" (UniqueName: \"kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.910926 4737 generic.go:334] "Generic (PLEG): container finished" podID="c9c00937-8b37-4a41-8403-c69b2e307675" containerID="0233166c83b96ca780f8bf20d11d9e4c36794a2c5a1095fcae0d8f4383628120" exitCode=0 Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.911002 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2wspq" Jan 26 18:53:31 crc kubenswrapper[4737]: I0126 18:53:31.913781 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pb2pj" event={"ID":"c9c00937-8b37-4a41-8403-c69b2e307675","Type":"ContainerDied","Data":"0233166c83b96ca780f8bf20d11d9e4c36794a2c5a1095fcae0d8f4383628120"} Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.011815 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.011909 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk86r\" (UniqueName: \"kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.012718 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.030626 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk86r\" (UniqueName: \"kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r\") pod \"mysqld-exporter-bc81-account-create-update-fjw9f\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.133502 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.209362 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.496956 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2wspq"] Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.506860 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2wspq"] Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.635466 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nppft"] Jan 26 18:53:32 crc kubenswrapper[4737]: W0126 18:53:32.635601 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3228aed7_c127_465a_ba59_822d4e6e92e6.slice/crio-34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528 WatchSource:0}: Error finding container 34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528: Status 404 returned error can't find the container with id 34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528 Jan 26 18:53:32 crc kubenswrapper[4737]: W0126 18:53:32.824020 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907a6367_2724_43bf_aabf_b9488debfed4.slice/crio-6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec WatchSource:0}: Error finding container 6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec: Status 404 returned error can't find the container with id 6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.826024 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-bc81-account-create-update-fjw9f"] Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.922458 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-db-create-nppft" event={"ID":"3228aed7-c127-465a-ba59-822d4e6e92e6","Type":"ContainerStarted","Data":"b5695aa4f93e4692032724d9acaf7537beb4dbcf2ee8a5e8ac43a123d4724b66"} Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.922783 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nppft" event={"ID":"3228aed7-c127-465a-ba59-822d4e6e92e6","Type":"ContainerStarted","Data":"34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528"} Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.924198 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" event={"ID":"907a6367-2724-43bf-aabf-b9488debfed4","Type":"ContainerStarted","Data":"6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec"} Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.945593 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-nppft" podStartSLOduration=1.9455744799999999 podStartE2EDuration="1.94557448s" podCreationTimestamp="2026-01-26 18:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:32.939712151 +0000 UTC m=+1386.247906879" watchObservedRunningTime="2026-01-26 18:53:32.94557448 +0000 UTC m=+1386.253769188" Jan 26 18:53:32 crc kubenswrapper[4737]: I0126 18:53:32.993094 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1d8684-f062-414f-a991-fe492f651e21" path="/var/lib/kubelet/pods/3b1d8684-f062-414f-a991-fe492f651e21/volumes" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.416864 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.444893 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts\") pod \"5e51d252-fc4e-4694-87e5-dade4de60ec5\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.444986 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txrc\" (UniqueName: \"kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc\") pod \"5e51d252-fc4e-4694-87e5-dade4de60ec5\" (UID: \"5e51d252-fc4e-4694-87e5-dade4de60ec5\") " Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.446434 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e51d252-fc4e-4694-87e5-dade4de60ec5" (UID: "5e51d252-fc4e-4694-87e5-dade4de60ec5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.451597 4737 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod425352a9-7fbe-4370-be54-cb85d79de0b1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod425352a9-7fbe-4370-be54-cb85d79de0b1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod425352a9_7fbe_4370_be54_cb85d79de0b1.slice" Jan 26 18:53:33 crc kubenswrapper[4737]: E0126 18:53:33.451671 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod425352a9-7fbe-4370-be54-cb85d79de0b1] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod425352a9-7fbe-4370-be54-cb85d79de0b1] : Timed out while waiting for systemd to remove kubepods-besteffort-pod425352a9_7fbe_4370_be54_cb85d79de0b1.slice" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" podUID="425352a9-7fbe-4370-be54-cb85d79de0b1" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.455319 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc" (OuterVolumeSpecName: "kube-api-access-8txrc") pod "5e51d252-fc4e-4694-87e5-dade4de60ec5" (UID: "5e51d252-fc4e-4694-87e5-dade4de60ec5"). InnerVolumeSpecName "kube-api-access-8txrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.455337 4737 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podbb1f1f93-5c26-47f2-b5f1-42d96632aa89"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podbb1f1f93-5c26-47f2-b5f1-42d96632aa89] : Timed out while waiting for systemd to remove kubepods-besteffort-podbb1f1f93_5c26_47f2_b5f1_42d96632aa89.slice" Jan 26 18:53:33 crc kubenswrapper[4737]: E0126 18:53:33.455391 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podbb1f1f93-5c26-47f2-b5f1-42d96632aa89] : unable to destroy cgroup paths for cgroup [kubepods besteffort podbb1f1f93-5c26-47f2-b5f1-42d96632aa89] : Timed out while waiting for systemd to remove kubepods-besteffort-podbb1f1f93_5c26_47f2_b5f1_42d96632aa89.slice" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" podUID="bb1f1f93-5c26-47f2-b5f1-42d96632aa89" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.548170 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51d252-fc4e-4694-87e5-dade4de60ec5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.548230 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txrc\" (UniqueName: \"kubernetes.io/projected/5e51d252-fc4e-4694-87e5-dade4de60ec5-kube-api-access-8txrc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.763503 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.849382 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.854414 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdtsf\" (UniqueName: \"kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf\") pod \"a9404174-9225-41ad-9db6-d523f17739d0\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.854554 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts\") pod \"a9404174-9225-41ad-9db6-d523f17739d0\" (UID: \"a9404174-9225-41ad-9db6-d523f17739d0\") " Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.856047 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9404174-9225-41ad-9db6-d523f17739d0" (UID: "a9404174-9225-41ad-9db6-d523f17739d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.860245 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.860658 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="dnsmasq-dns" containerID="cri-o://f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf" gracePeriod=10 Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.866797 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.868358 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf" (OuterVolumeSpecName: "kube-api-access-wdtsf") pod "a9404174-9225-41ad-9db6-d523f17739d0" (UID: "a9404174-9225-41ad-9db6-d523f17739d0"). InnerVolumeSpecName "kube-api-access-wdtsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.876546 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.902861 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.914309 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.958131 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9404174-9225-41ad-9db6-d523f17739d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.958165 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdtsf\" (UniqueName: \"kubernetes.io/projected/a9404174-9225-41ad-9db6-d523f17739d0-kube-api-access-wdtsf\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.992618 4737 generic.go:334] "Generic (PLEG): container finished" podID="907a6367-2724-43bf-aabf-b9488debfed4" containerID="1f6b732a694472af725010e14d95dd6f76a2262a7c4689b900956af6492c75b9" exitCode=0 Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.992717 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" event={"ID":"907a6367-2724-43bf-aabf-b9488debfed4","Type":"ContainerDied","Data":"1f6b732a694472af725010e14d95dd6f76a2262a7c4689b900956af6492c75b9"} Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.999006 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ntqg8" event={"ID":"e633aa68-a0b7-4ee4-bf00-7d46105654e2","Type":"ContainerDied","Data":"aaa2655e3e7923485bf313b5caf3bc38bb74acc5668adcae7cb4444b8317df4a"} Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.999059 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa2655e3e7923485bf313b5caf3bc38bb74acc5668adcae7cb4444b8317df4a" Jan 26 18:53:33 crc kubenswrapper[4737]: I0126 18:53:33.999155 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ntqg8" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.002046 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xx2nh" event={"ID":"b117bcd7-b58c-4af6-9bd6-ce70ec70f601","Type":"ContainerDied","Data":"1b4594a9be0940b7993e800f93bbe366dc501fa4e9730a024775a29377d9f2ba"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.002494 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b4594a9be0940b7993e800f93bbe366dc501fa4e9730a024775a29377d9f2ba" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.002573 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xx2nh" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.022352 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a4fd-account-create-update-jq2tl" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.022363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a4fd-account-create-update-jq2tl" event={"ID":"a9404174-9225-41ad-9db6-d523f17739d0","Type":"ContainerDied","Data":"a781f243c477db96239118a50dbe3146cd7700683a01c3f177065045d4d08612"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.022685 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a781f243c477db96239118a50dbe3146cd7700683a01c3f177065045d4d08612" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.025913 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f9d4-account-create-update-bf25x" event={"ID":"5e51d252-fc4e-4694-87e5-dade4de60ec5","Type":"ContainerDied","Data":"7a2e817047b054dea7ab88e612db0191ec52e27750f1b1776587ef5e3416eda3"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.025943 4737 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7a2e817047b054dea7ab88e612db0191ec52e27750f1b1776587ef5e3416eda3" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.026007 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f9d4-account-create-update-bf25x" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.032841 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-887b-account-create-update-zvz84" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.033283 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-887b-account-create-update-zvz84" event={"ID":"e57f4e7a-0e31-4911-9f19-a43e3d91e721","Type":"ContainerDied","Data":"cb19d7543861ea57f1c447d098756f09f11f2488c92ea7e02aaf92896036540d"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.033317 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb19d7543861ea57f1c447d098756f09f11f2488c92ea7e02aaf92896036540d" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.035376 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pb2pj" event={"ID":"c9c00937-8b37-4a41-8403-c69b2e307675","Type":"ContainerDied","Data":"f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.035429 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a39d1750dd54eeaabf2642450abdcf5256979fe8df5e7005ae3326af0e0fa6" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.035518 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pb2pj" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.041030 4737 generic.go:334] "Generic (PLEG): container finished" podID="3228aed7-c127-465a-ba59-822d4e6e92e6" containerID="b5695aa4f93e4692032724d9acaf7537beb4dbcf2ee8a5e8ac43a123d4724b66" exitCode=0 Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.041438 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nppft" event={"ID":"3228aed7-c127-465a-ba59-822d4e6e92e6","Type":"ContainerDied","Data":"b5695aa4f93e4692032724d9acaf7537beb4dbcf2ee8a5e8ac43a123d4724b66"} Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.041836 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9xtr2" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.043831 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-527jf" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.059772 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lxbz\" (UniqueName: \"kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz\") pod \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.059867 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts\") pod \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.059891 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4v9\" (UniqueName: 
\"kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9\") pod \"c9c00937-8b37-4a41-8403-c69b2e307675\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.059945 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts\") pod \"c9c00937-8b37-4a41-8403-c69b2e307675\" (UID: \"c9c00937-8b37-4a41-8403-c69b2e307675\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.060208 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts\") pod \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\" (UID: \"b117bcd7-b58c-4af6-9bd6-ce70ec70f601\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.060234 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts\") pod \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.060297 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjb26\" (UniqueName: \"kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26\") pod \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\" (UID: \"e57f4e7a-0e31-4911-9f19-a43e3d91e721\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.060326 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqzc6\" (UniqueName: \"kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6\") pod \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\" (UID: \"e633aa68-a0b7-4ee4-bf00-7d46105654e2\") " Jan 26 
18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.061829 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b117bcd7-b58c-4af6-9bd6-ce70ec70f601" (UID: "b117bcd7-b58c-4af6-9bd6-ce70ec70f601"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.061943 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e57f4e7a-0e31-4911-9f19-a43e3d91e721" (UID: "e57f4e7a-0e31-4911-9f19-a43e3d91e721"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.062792 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9c00937-8b37-4a41-8403-c69b2e307675" (UID: "c9c00937-8b37-4a41-8403-c69b2e307675"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.063383 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e633aa68-a0b7-4ee4-bf00-7d46105654e2" (UID: "e633aa68-a0b7-4ee4-bf00-7d46105654e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.066940 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6" (OuterVolumeSpecName: "kube-api-access-jqzc6") pod "e633aa68-a0b7-4ee4-bf00-7d46105654e2" (UID: "e633aa68-a0b7-4ee4-bf00-7d46105654e2"). InnerVolumeSpecName "kube-api-access-jqzc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.068635 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz" (OuterVolumeSpecName: "kube-api-access-7lxbz") pod "b117bcd7-b58c-4af6-9bd6-ce70ec70f601" (UID: "b117bcd7-b58c-4af6-9bd6-ce70ec70f601"). InnerVolumeSpecName "kube-api-access-7lxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.069749 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9" (OuterVolumeSpecName: "kube-api-access-hv4v9") pod "c9c00937-8b37-4a41-8403-c69b2e307675" (UID: "c9c00937-8b37-4a41-8403-c69b2e307675"). InnerVolumeSpecName "kube-api-access-hv4v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.071542 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26" (OuterVolumeSpecName: "kube-api-access-jjb26") pod "e57f4e7a-0e31-4911-9f19-a43e3d91e721" (UID: "e57f4e7a-0e31-4911-9f19-a43e3d91e721"). InnerVolumeSpecName "kube-api-access-jjb26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.150460 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163159 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163206 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57f4e7a-0e31-4911-9f19-a43e3d91e721-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163218 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjb26\" (UniqueName: \"kubernetes.io/projected/e57f4e7a-0e31-4911-9f19-a43e3d91e721-kube-api-access-jjb26\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163232 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqzc6\" (UniqueName: \"kubernetes.io/projected/e633aa68-a0b7-4ee4-bf00-7d46105654e2-kube-api-access-jqzc6\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163244 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lxbz\" (UniqueName: \"kubernetes.io/projected/b117bcd7-b58c-4af6-9bd6-ce70ec70f601-kube-api-access-7lxbz\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163255 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e633aa68-a0b7-4ee4-bf00-7d46105654e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163266 4737 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-hv4v9\" (UniqueName: \"kubernetes.io/projected/c9c00937-8b37-4a41-8403-c69b2e307675-kube-api-access-hv4v9\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.163280 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c00937-8b37-4a41-8403-c69b2e307675-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.168298 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-527jf"] Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.203929 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.218472 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9xtr2"] Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.641599 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.775719 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq2kl\" (UniqueName: \"kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl\") pod \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.775857 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc\") pod \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.775906 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config\") pod \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.778916 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb\") pod \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\" (UID: \"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb\") " Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.788294 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl" (OuterVolumeSpecName: "kube-api-access-pq2kl") pod "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" (UID: "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb"). InnerVolumeSpecName "kube-api-access-pq2kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.840729 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config" (OuterVolumeSpecName: "config") pod "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" (UID: "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.842421 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" (UID: "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.844546 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" (UID: "a8bf10ed-050c-48c9-8967-f2bb9b53b9eb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.883152 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.883184 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq2kl\" (UniqueName: \"kubernetes.io/projected/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-kube-api-access-pq2kl\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.883196 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.883204 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.995727 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425352a9-7fbe-4370-be54-cb85d79de0b1" path="/var/lib/kubelet/pods/425352a9-7fbe-4370-be54-cb85d79de0b1/volumes" Jan 26 18:53:34 crc kubenswrapper[4737]: I0126 18:53:34.996380 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1f1f93-5c26-47f2-b5f1-42d96632aa89" path="/var/lib/kubelet/pods/bb1f1f93-5c26-47f2-b5f1-42d96632aa89/volumes" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.056223 4737 generic.go:334] "Generic (PLEG): container finished" podID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerID="f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf" exitCode=0 Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.056626 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.057009 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" event={"ID":"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb","Type":"ContainerDied","Data":"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf"} Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.057093 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-kjzn5" event={"ID":"a8bf10ed-050c-48c9-8967-f2bb9b53b9eb","Type":"ContainerDied","Data":"67a4228c8f047840cf4cd363df218607266d982917d94ef5fb6f850921fa791d"} Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.057123 4737 scope.go:117] "RemoveContainer" containerID="f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.086205 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.098351 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-kjzn5"] Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.100337 4737 scope.go:117] "RemoveContainer" containerID="b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.132735 4737 scope.go:117] "RemoveContainer" containerID="f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf" Jan 26 18:53:35 crc kubenswrapper[4737]: E0126 18:53:35.133563 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf\": container with ID starting with f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf not found: ID does not exist" 
containerID="f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.133599 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf"} err="failed to get container status \"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf\": rpc error: code = NotFound desc = could not find container \"f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf\": container with ID starting with f3693d3822b348d34ac56e55d4634eca01ce143e5df194d6e5232e471f5e3ecf not found: ID does not exist" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.133624 4737 scope.go:117] "RemoveContainer" containerID="b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887" Jan 26 18:53:35 crc kubenswrapper[4737]: E0126 18:53:35.134398 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887\": container with ID starting with b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887 not found: ID does not exist" containerID="b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887" Jan 26 18:53:35 crc kubenswrapper[4737]: I0126 18:53:35.134423 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887"} err="failed to get container status \"b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887\": rpc error: code = NotFound desc = could not find container \"b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887\": container with ID starting with b04071c9f667679af64f9f30c5016a6c3d393ca9486101f824a4a4e248545887 not found: ID does not exist" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.070791 4737 generic.go:334] 
"Generic (PLEG): container finished" podID="c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" containerID="92342e732a0b918a1eaac74018c29ac11c769fbea4c5a6e7349f67293c72f3fb" exitCode=0 Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.071237 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2fbb8" event={"ID":"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47","Type":"ContainerDied","Data":"92342e732a0b918a1eaac74018c29ac11c769fbea4c5a6e7349f67293c72f3fb"} Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.272609 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.287659 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.328392 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxfw\" (UniqueName: \"kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw\") pod \"3228aed7-c127-465a-ba59-822d4e6e92e6\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.328487 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts\") pod \"907a6367-2724-43bf-aabf-b9488debfed4\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.328546 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts\") pod \"3228aed7-c127-465a-ba59-822d4e6e92e6\" (UID: \"3228aed7-c127-465a-ba59-822d4e6e92e6\") " Jan 26 18:53:36 crc 
kubenswrapper[4737]: I0126 18:53:36.328572 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk86r\" (UniqueName: \"kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r\") pod \"907a6367-2724-43bf-aabf-b9488debfed4\" (UID: \"907a6367-2724-43bf-aabf-b9488debfed4\") " Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.329469 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3228aed7-c127-465a-ba59-822d4e6e92e6" (UID: "3228aed7-c127-465a-ba59-822d4e6e92e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.329651 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "907a6367-2724-43bf-aabf-b9488debfed4" (UID: "907a6367-2724-43bf-aabf-b9488debfed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.334782 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r" (OuterVolumeSpecName: "kube-api-access-nk86r") pod "907a6367-2724-43bf-aabf-b9488debfed4" (UID: "907a6367-2724-43bf-aabf-b9488debfed4"). InnerVolumeSpecName "kube-api-access-nk86r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.335052 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw" (OuterVolumeSpecName: "kube-api-access-pnxfw") pod "3228aed7-c127-465a-ba59-822d4e6e92e6" (UID: "3228aed7-c127-465a-ba59-822d4e6e92e6"). InnerVolumeSpecName "kube-api-access-pnxfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.345710 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f5nm8"] Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.346877 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b117bcd7-b58c-4af6-9bd6-ce70ec70f601" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.346979 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b117bcd7-b58c-4af6-9bd6-ce70ec70f601" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.347106 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="dnsmasq-dns" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.347180 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="dnsmasq-dns" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.347270 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e51d252-fc4e-4694-87e5-dade4de60ec5" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.347340 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e51d252-fc4e-4694-87e5-dade4de60ec5" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.347428 4737 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3228aed7-c127-465a-ba59-822d4e6e92e6" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.347499 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3228aed7-c127-465a-ba59-822d4e6e92e6" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.347581 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c00937-8b37-4a41-8403-c69b2e307675" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.347879 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c00937-8b37-4a41-8403-c69b2e307675" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.348233 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57f4e7a-0e31-4911-9f19-a43e3d91e721" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.348335 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57f4e7a-0e31-4911-9f19-a43e3d91e721" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.348426 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e633aa68-a0b7-4ee4-bf00-7d46105654e2" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.349892 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e633aa68-a0b7-4ee4-bf00-7d46105654e2" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.349993 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="init" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.350272 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="init" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 
18:53:36.350375 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9404174-9225-41ad-9db6-d523f17739d0" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.350446 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9404174-9225-41ad-9db6-d523f17739d0" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: E0126 18:53:36.350590 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907a6367-2724-43bf-aabf-b9488debfed4" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.351050 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="907a6367-2724-43bf-aabf-b9488debfed4" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.351894 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c00937-8b37-4a41-8403-c69b2e307675" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352000 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="907a6367-2724-43bf-aabf-b9488debfed4" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352113 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3228aed7-c127-465a-ba59-822d4e6e92e6" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352190 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9404174-9225-41ad-9db6-d523f17739d0" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352265 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e51d252-fc4e-4694-87e5-dade4de60ec5" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352337 4737 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" containerName="dnsmasq-dns" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352412 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e633aa68-a0b7-4ee4-bf00-7d46105654e2" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352506 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b117bcd7-b58c-4af6-9bd6-ce70ec70f601" containerName="mariadb-database-create" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.352679 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57f4e7a-0e31-4911-9f19-a43e3d91e721" containerName="mariadb-account-create-update" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.353918 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.357493 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.368981 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f5nm8"] Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431014 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts\") pod \"root-account-create-update-f5nm8\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431190 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncdx\" (UniqueName: \"kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx\") pod 
\"root-account-create-update-f5nm8\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431337 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxfw\" (UniqueName: \"kubernetes.io/projected/3228aed7-c127-465a-ba59-822d4e6e92e6-kube-api-access-pnxfw\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431357 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907a6367-2724-43bf-aabf-b9488debfed4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431367 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3228aed7-c127-465a-ba59-822d4e6e92e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.431376 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk86r\" (UniqueName: \"kubernetes.io/projected/907a6367-2724-43bf-aabf-b9488debfed4-kube-api-access-nk86r\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.533007 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts\") pod \"root-account-create-update-f5nm8\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.533393 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sncdx\" (UniqueName: \"kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx\") pod \"root-account-create-update-f5nm8\" (UID: 
\"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.534018 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts\") pod \"root-account-create-update-f5nm8\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.550823 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sncdx\" (UniqueName: \"kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx\") pod \"root-account-create-update-f5nm8\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:36 crc kubenswrapper[4737]: I0126 18:53:36.751036 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.023351 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bf10ed-050c-48c9-8967-f2bb9b53b9eb" path="/var/lib/kubelet/pods/a8bf10ed-050c-48c9-8967-f2bb9b53b9eb/volumes" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.086512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerStarted","Data":"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"} Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.090890 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" event={"ID":"907a6367-2724-43bf-aabf-b9488debfed4","Type":"ContainerDied","Data":"6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec"} Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.090929 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6586a5c692c9f58f094264b97ea4cf7dd697a083c028768898012608f38101ec" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.090994 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-bc81-account-create-update-fjw9f" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.095704 4737 generic.go:334] "Generic (PLEG): container finished" podID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerID="c04a9af212861452c83b676661f97393cc144f3603cfef17b7005dfd75266a8c" exitCode=0 Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.095788 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerDied","Data":"c04a9af212861452c83b676661f97393cc144f3603cfef17b7005dfd75266a8c"} Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.098511 4737 generic.go:334] "Generic (PLEG): container finished" podID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerID="3014aff826d6940c1d9ef79a0dc47bd5a4dba695d4fb45b94f0378a1b7619f38" exitCode=0 Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.098572 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerDied","Data":"3014aff826d6940c1d9ef79a0dc47bd5a4dba695d4fb45b94f0378a1b7619f38"} Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.102921 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-nppft" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.103160 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-nppft" event={"ID":"3228aed7-c127-465a-ba59-822d4e6e92e6","Type":"ContainerDied","Data":"34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528"} Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.103210 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34206a767deeb7f6248723e9022d8973a7d57bfc08378c6c85d2acebdef6f528" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.115227 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=5.991910289 podStartE2EDuration="1m16.115210237s" podCreationTimestamp="2026-01-26 18:52:21 +0000 UTC" firstStartedPulling="2026-01-26 18:52:26.130199229 +0000 UTC m=+1319.438393937" lastFinishedPulling="2026-01-26 18:53:36.253499177 +0000 UTC m=+1389.561693885" observedRunningTime="2026-01-26 18:53:37.109603455 +0000 UTC m=+1390.417798183" watchObservedRunningTime="2026-01-26 18:53:37.115210237 +0000 UTC m=+1390.423404945" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.187424 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f5nm8"] Jan 26 18:53:37 crc kubenswrapper[4737]: W0126 18:53:37.210961 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbca29a83_4351_44f8_9f9a_d677dc49e2cc.slice/crio-7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f WatchSource:0}: Error finding container 7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f: Status 404 returned error can't find the container with id 7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f Jan 26 18:53:37 crc 
kubenswrapper[4737]: I0126 18:53:37.594055 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763221 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763279 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763385 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763415 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vnv5\" (UniqueName: \"kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763486 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763548 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.763606 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle\") pod \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\" (UID: \"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47\") " Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.765814 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.766523 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.797488 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts" (OuterVolumeSpecName: "scripts") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.806789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.806789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5" (OuterVolumeSpecName: "kube-api-access-8vnv5") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "kube-api-access-8vnv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.806858 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.806895 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" (UID: "c9be0bf2-1b3f-4f77-89ec-b5afa2362e47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866157 4737 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866193 4737 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866206 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866214 4737 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866223 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866231 4737 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:37 crc kubenswrapper[4737]: I0126 18:53:37.866239 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vnv5\" (UniqueName: \"kubernetes.io/projected/c9be0bf2-1b3f-4f77-89ec-b5afa2362e47-kube-api-access-8vnv5\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.112804 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerStarted","Data":"10ba0aca777890fd9ac6c38b0f53691f9bca7a7ec22d9448fc1c1adc2a454d16"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.113357 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.115192 4737 generic.go:334] "Generic (PLEG): container finished" podID="bca29a83-4351-44f8-9f9a-d677dc49e2cc" containerID="030a0100206de3b5ac22e3f507c2013ce42eebd95236b216e3933c2d5ccf93b1" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.115235 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f5nm8" event={"ID":"bca29a83-4351-44f8-9f9a-d677dc49e2cc","Type":"ContainerDied","Data":"030a0100206de3b5ac22e3f507c2013ce42eebd95236b216e3933c2d5ccf93b1"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.115297 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f5nm8" event={"ID":"bca29a83-4351-44f8-9f9a-d677dc49e2cc","Type":"ContainerStarted","Data":"7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.117228 4737 generic.go:334] "Generic (PLEG): container finished" podID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerID="2a45bf488bd58772199e809a22fe3c7f3e42578b271a140966f49ff0c91d3844" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.117306 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerDied","Data":"2a45bf488bd58772199e809a22fe3c7f3e42578b271a140966f49ff0c91d3844"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.119181 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2fbb8" 
event={"ID":"c9be0bf2-1b3f-4f77-89ec-b5afa2362e47","Type":"ContainerDied","Data":"c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.119230 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c69df6a043e1281050607167efc61d992be5d4ebecee768f4c2c844853652f" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.119193 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2fbb8" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.122543 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerStarted","Data":"022be3a0298b767246af123798dbc6e92b83adbf032bcac0595eebfe08f81137"} Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.122835 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.142985 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=40.270513972 podStartE2EDuration="1m24.14296511s" podCreationTimestamp="2026-01-26 18:52:14 +0000 UTC" firstStartedPulling="2026-01-26 18:52:17.361240911 +0000 UTC m=+1310.669435619" lastFinishedPulling="2026-01-26 18:53:01.233692049 +0000 UTC m=+1354.541886757" observedRunningTime="2026-01-26 18:53:38.139351204 +0000 UTC m=+1391.447545932" watchObservedRunningTime="2026-01-26 18:53:38.14296511 +0000 UTC m=+1391.451159818" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.242253 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.216079376 podStartE2EDuration="1m24.242233996s" podCreationTimestamp="2026-01-26 18:52:14 +0000 UTC" firstStartedPulling="2026-01-26 18:52:17.199584325 +0000 UTC 
m=+1310.507779033" lastFinishedPulling="2026-01-26 18:53:01.225738955 +0000 UTC m=+1354.533933653" observedRunningTime="2026-01-26 18:53:38.235030755 +0000 UTC m=+1391.543225493" watchObservedRunningTime="2026-01-26 18:53:38.242233996 +0000 UTC m=+1391.550428704" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.419013 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.419062 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.422184 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 18:53:38 crc kubenswrapper[4737]: I0126 18:53:38.526817 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.133128 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerStarted","Data":"06ea35a5ccb8ba1fbe6e8de8565abfd8337b400abc61eb1d009c2e44d87e15bc"} Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.133754 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.135317 4737 generic.go:334] "Generic (PLEG): container finished" podID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerID="06306e7466a0c6f5f61dfb9fca1c925ea9079f79f0d7027946b84c72b13358b0" exitCode=0 Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.136208 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerDied","Data":"06306e7466a0c6f5f61dfb9fca1c925ea9079f79f0d7027946b84c72b13358b0"} Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.151596 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.197343 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.138813385 podStartE2EDuration="1m24.197314101s" podCreationTimestamp="2026-01-26 18:52:15 +0000 UTC" firstStartedPulling="2026-01-26 18:52:17.1438528 +0000 UTC m=+1310.452047508" lastFinishedPulling="2026-01-26 18:53:01.202353516 +0000 UTC m=+1354.510548224" observedRunningTime="2026-01-26 18:53:39.194837823 +0000 UTC m=+1392.503032531" watchObservedRunningTime="2026-01-26 18:53:39.197314101 +0000 UTC m=+1392.505508809" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.748798 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z8mqw"] Jan 26 18:53:39 crc kubenswrapper[4737]: E0126 18:53:39.749554 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" containerName="swift-ring-rebalance" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.749572 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" containerName="swift-ring-rebalance" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.749807 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9be0bf2-1b3f-4f77-89ec-b5afa2362e47" containerName="swift-ring-rebalance" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.750635 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.754642 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.755108 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gpvt" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.769660 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8mqw"] Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.841058 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.841169 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpkp\" (UniqueName: \"kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.841315 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.841400 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.861414 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.947046 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.947135 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbpkp\" (UniqueName: \"kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.947218 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.947274 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.955800 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.955888 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.956441 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.971752 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrckb" podUID="11408d0f-4b45-4dab-bc9e-965ac14aed79" containerName="ovn-controller" probeResult="failure" output=< Jan 26 18:53:39 crc kubenswrapper[4737]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 18:53:39 crc kubenswrapper[4737]: > Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.976545 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbpkp\" (UniqueName: \"kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp\") pod \"glance-db-sync-z8mqw\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:39 crc kubenswrapper[4737]: I0126 18:53:39.983785 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tnjz7" 
Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.049017 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts\") pod \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.049114 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sncdx\" (UniqueName: \"kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx\") pod \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\" (UID: \"bca29a83-4351-44f8-9f9a-d677dc49e2cc\") " Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.050908 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bca29a83-4351-44f8-9f9a-d677dc49e2cc" (UID: "bca29a83-4351-44f8-9f9a-d677dc49e2cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.053041 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx" (OuterVolumeSpecName: "kube-api-access-sncdx") pod "bca29a83-4351-44f8-9f9a-d677dc49e2cc" (UID: "bca29a83-4351-44f8-9f9a-d677dc49e2cc"). InnerVolumeSpecName "kube-api-access-sncdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.077057 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8mqw" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.149003 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerStarted","Data":"afd662fed630029ff5f2e324a72eedc21f44c56b09e0acccce1a15ca6ba0a38d"} Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.149655 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.151421 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bca29a83-4351-44f8-9f9a-d677dc49e2cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.151442 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sncdx\" (UniqueName: \"kubernetes.io/projected/bca29a83-4351-44f8-9f9a-d677dc49e2cc-kube-api-access-sncdx\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.154129 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f5nm8" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.154129 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f5nm8" event={"ID":"bca29a83-4351-44f8-9f9a-d677dc49e2cc","Type":"ContainerDied","Data":"7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f"} Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.154191 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca8d66f4d39c2c27b2978d96a9cff0946bb3c71b825ee892cf35c3615c8e67f" Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.217926 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371950.636875 podStartE2EDuration="1m26.217900635s" podCreationTimestamp="2026-01-26 18:52:14 +0000 UTC" firstStartedPulling="2026-01-26 18:52:16.834033689 +0000 UTC m=+1310.142228397" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:40.210761737 +0000 UTC m=+1393.518956445" watchObservedRunningTime="2026-01-26 18:53:40.217900635 +0000 UTC m=+1393.526095343" Jan 26 18:53:40 crc kubenswrapper[4737]: W0126 18:53:40.738359 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9db6e67_d109_41f6_bd12_a68553ab3bf6.slice/crio-8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b WatchSource:0}: Error finding container 8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b: Status 404 returned error can't find the container with id 8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b Jan 26 18:53:40 crc kubenswrapper[4737]: I0126 18:53:40.757172 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z8mqw"] Jan 26 18:53:41 crc kubenswrapper[4737]: I0126 18:53:41.165761 4737 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-sync-z8mqw" event={"ID":"b9db6e67-d109-41f6-bd12-a68553ab3bf6","Type":"ContainerStarted","Data":"8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b"} Jan 26 18:53:41 crc kubenswrapper[4737]: I0126 18:53:41.744568 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.059712 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"] Jan 26 18:53:42 crc kubenswrapper[4737]: E0126 18:53:42.060420 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca29a83-4351-44f8-9f9a-d677dc49e2cc" containerName="mariadb-account-create-update" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.060442 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca29a83-4351-44f8-9f9a-d677dc49e2cc" containerName="mariadb-account-create-update" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.060681 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca29a83-4351-44f8-9f9a-d677dc49e2cc" containerName="mariadb-account-create-update" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.061666 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.070326 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"] Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.178636 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="prometheus" containerID="cri-o://cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f" gracePeriod=600 Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.178691 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="thanos-sidecar" containerID="cri-o://8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea" gracePeriod=600 Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.178818 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="config-reloader" containerID="cri-o://2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92" gracePeriod=600 Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.193431 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqdq\" (UniqueName: \"kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.193855 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.281914 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-f388-account-create-update-2kjdh"] Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.283756 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.287370 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.293625 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-f388-account-create-update-2kjdh"] Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.296097 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqdq\" (UniqueName: \"kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.296264 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.296965 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.338982 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqdq\" (UniqueName: \"kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq\") pod \"mysqld-exporter-openstack-cell1-db-create-8zd8m\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.386970 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.400956 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.401028 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9ps\" (UniqueName: \"kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.504232 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.504292 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9ps\" (UniqueName: \"kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.505177 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.525752 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9ps\" (UniqueName: \"kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps\") pod \"mysqld-exporter-f388-account-create-update-2kjdh\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") " pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.570720 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f5nm8"] Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.583368 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f5nm8"] Jan 26 18:53:42 crc 
kubenswrapper[4737]: I0126 18:53:42.816508 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:42 crc kubenswrapper[4737]: I0126 18:53:42.995619 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca29a83-4351-44f8-9f9a-d677dc49e2cc" path="/var/lib/kubelet/pods/bca29a83-4351-44f8-9f9a-d677dc49e2cc/volumes" Jan 26 18:53:42 crc kubenswrapper[4737]: W0126 18:53:42.997393 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a88f548_5326_4e23_bda1_cf97ba384393.slice/crio-8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0 WatchSource:0}: Error finding container 8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0: Status 404 returned error can't find the container with id 8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0 Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.001689 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"] Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.196268 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" event={"ID":"7a88f548-5326-4e23-bda1-cf97ba384393","Type":"ContainerStarted","Data":"8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0"} Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.205122 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.223960 4737 generic.go:334] "Generic (PLEG): container finished" podID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea" exitCode=0 Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224122 4737 generic.go:334] "Generic (PLEG): container finished" podID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92" exitCode=0 Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224170 4737 generic.go:334] "Generic (PLEG): container finished" podID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f" exitCode=0 Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224268 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerDied","Data":"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"} Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224342 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerDied","Data":"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"} Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerDied","Data":"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"} Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224421 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"539f99ad-d4f8-4e02-aca3-f247bc802698","Type":"ContainerDied","Data":"d15c6f609cd91a92edf04dbfbaf960fd3d9092a25c528a4624b62dc5fc4e75c6"} Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.224477 4737 scope.go:117] "RemoveContainer" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.260361 4737 scope.go:117] "RemoveContainer" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.297642 4737 scope.go:117] "RemoveContainer" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.326668 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.326781 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcw7m\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.326821 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327002 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327035 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327129 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327179 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327213 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327378 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: 
\"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.327444 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0\") pod \"539f99ad-d4f8-4e02-aca3-f247bc802698\" (UID: \"539f99ad-d4f8-4e02-aca3-f247bc802698\") " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.328927 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.330245 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.331502 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.336572 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config" (OuterVolumeSpecName: "config") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.337579 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.338209 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.338761 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out" (OuterVolumeSpecName: "config-out") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.346660 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m" (OuterVolumeSpecName: "kube-api-access-vcw7m") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "kube-api-access-vcw7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.380346 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.381482 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config" (OuterVolumeSpecName: "web-config") pod "539f99ad-d4f8-4e02-aca3-f247bc802698" (UID: "539f99ad-d4f8-4e02-aca3-f247bc802698"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.402140 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-f388-account-create-update-2kjdh"] Jan 26 18:53:43 crc kubenswrapper[4737]: W0126 18:53:43.404675 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bccfe1e_6106_4184_8cff_37e44dfaef61.slice/crio-4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21 WatchSource:0}: Error finding container 4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21: Status 404 returned error can't find the container with id 4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21 Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431617 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431649 4737 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431661 4737 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431673 4737 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431682 4737 reconciler_common.go:293] 
"Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/539f99ad-d4f8-4e02-aca3-f247bc802698-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431713 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") on node \"crc\" " Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431724 4737 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431736 4737 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/539f99ad-d4f8-4e02-aca3-f247bc802698-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431747 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcw7m\" (UniqueName: \"kubernetes.io/projected/539f99ad-d4f8-4e02-aca3-f247bc802698-kube-api-access-vcw7m\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.431758 4737 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/539f99ad-d4f8-4e02-aca3-f247bc802698-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.473126 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.473355 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48") on node "crc"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.488624 4737 scope.go:117] "RemoveContainer" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.533311 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") on node \"crc\" DevicePath \"\""
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.549503 4737 scope.go:117] "RemoveContainer" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"
Jan 26 18:53:43 crc kubenswrapper[4737]: E0126 18:53:43.550714 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": container with ID starting with 8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea not found: ID does not exist" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.550760 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"} err="failed to get container status \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": rpc error: code = NotFound desc = could not find container \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": container with ID starting with 8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.550798 4737 scope.go:117] "RemoveContainer" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"
Jan 26 18:53:43 crc kubenswrapper[4737]: E0126 18:53:43.551213 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": container with ID starting with 2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92 not found: ID does not exist" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.551262 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"} err="failed to get container status \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": rpc error: code = NotFound desc = could not find container \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": container with ID starting with 2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92 not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.551293 4737 scope.go:117] "RemoveContainer" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"
Jan 26 18:53:43 crc kubenswrapper[4737]: E0126 18:53:43.551564 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": container with ID starting with cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f not found: ID does not exist" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.551589 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"} err="failed to get container status \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": rpc error: code = NotFound desc = could not find container \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": container with ID starting with cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.551604 4737 scope.go:117] "RemoveContainer" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"
Jan 26 18:53:43 crc kubenswrapper[4737]: E0126 18:53:43.552055 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": container with ID starting with 534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f not found: ID does not exist" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552123 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"} err="failed to get container status \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": rpc error: code = NotFound desc = could not find container \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": container with ID starting with 534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552139 4737 scope.go:117] "RemoveContainer" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552557 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"} err="failed to get container status \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": rpc error: code = NotFound desc = could not find container \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": container with ID starting with 8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552579 4737 scope.go:117] "RemoveContainer" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552877 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"} err="failed to get container status \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": rpc error: code = NotFound desc = could not find container \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": container with ID starting with 2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92 not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.552901 4737 scope.go:117] "RemoveContainer" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553166 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"} err="failed to get container status \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": rpc error: code = NotFound desc = could not find container \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": container with ID starting with cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553186 4737 scope.go:117] "RemoveContainer" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553437 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"} err="failed to get container status \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": rpc error: code = NotFound desc = could not find container \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": container with ID starting with 534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553466 4737 scope.go:117] "RemoveContainer" containerID="8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553757 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea"} err="failed to get container status \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": rpc error: code = NotFound desc = could not find container \"8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea\": container with ID starting with 8253bc6498d5cf6f20fcf6b2dfc0c45659a00e72c45105857da29af053278bea not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.553780 4737 scope.go:117] "RemoveContainer" containerID="2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.554261 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92"} err="failed to get container status \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": rpc error: code = NotFound desc = could not find container \"2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92\": container with ID starting with 2357be3fbbd97bef6dd25c9b79beff5a03788cc4d28b237f3b702ffcf5bf2b92 not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.554288 4737 scope.go:117] "RemoveContainer" containerID="cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.554538 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f"} err="failed to get container status \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": rpc error: code = NotFound desc = could not find container \"cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f\": container with ID starting with cb418ce5af80bfbb3b78e37a5ceb2c2e826568d5dbe57eea92dd3ac6d0bf783f not found: ID does not exist"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.554562 4737 scope.go:117] "RemoveContainer" containerID="534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"
Jan 26 18:53:43 crc kubenswrapper[4737]: I0126 18:53:43.554872 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f"} err="failed to get container status \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": rpc error: code = NotFound desc = could not find container \"534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f\": container with ID starting with 534e53f36240e829133ec44ea5c49ff7a0fb2f54f9886eafd969f2271734cf1f not found: ID does not exist"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.243568 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.247060 4737 generic.go:334] "Generic (PLEG): container finished" podID="8bccfe1e-6106-4184-8cff-37e44dfaef61" containerID="c71cbd06b9f2c1575dd0d13338932cb066dc87ea66baddeafdefe031565e0be4" exitCode=0
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.247143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" event={"ID":"8bccfe1e-6106-4184-8cff-37e44dfaef61","Type":"ContainerDied","Data":"c71cbd06b9f2c1575dd0d13338932cb066dc87ea66baddeafdefe031565e0be4"}
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.247175 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" event={"ID":"8bccfe1e-6106-4184-8cff-37e44dfaef61","Type":"ContainerStarted","Data":"4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21"}
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.255167 4737 generic.go:334] "Generic (PLEG): container finished" podID="7a88f548-5326-4e23-bda1-cf97ba384393" containerID="74c87092c1d976faa5b23ff53c96d4f88178745c10530b0f67f1dd31577b9725" exitCode=0
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.255199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" event={"ID":"7a88f548-5326-4e23-bda1-cf97ba384393","Type":"ContainerDied","Data":"74c87092c1d976faa5b23ff53c96d4f88178745c10530b0f67f1dd31577b9725"}
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.299146 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.309607 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.370689 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 18:53:44 crc kubenswrapper[4737]: E0126 18:53:44.371263 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="prometheus"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371286 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="prometheus"
Jan 26 18:53:44 crc kubenswrapper[4737]: E0126 18:53:44.371304 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="thanos-sidecar"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371312 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="thanos-sidecar"
Jan 26 18:53:44 crc kubenswrapper[4737]: E0126 18:53:44.371326 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="init-config-reloader"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371334 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="init-config-reloader"
Jan 26 18:53:44 crc kubenswrapper[4737]: E0126 18:53:44.371372 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="config-reloader"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371380 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="config-reloader"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371622 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="config-reloader"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371645 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="prometheus"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.371659 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" containerName="thanos-sidecar"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.374594 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.378908 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.379033 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.379164 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f47fq"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.388427 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.389772 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.389993 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.390040 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.390369 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.401699 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.444172 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.451956 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.452194 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.452250 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.452276 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.452432 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6qt\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-kube-api-access-4v6qt\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.452789 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453035 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453116 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453235 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453303 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453340 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453367 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd029654-7895-4949-9ef7-b5cdd6043451-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.453412 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.555507 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.555908 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.555936 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.555975 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6qt\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-kube-api-access-4v6qt\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556012 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556102 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556143 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556228 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556282 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556315 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556342 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd029654-7895-4949-9ef7-b5cdd6043451-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556386 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.556428 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.557271 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.558112 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.558493 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dd029654-7895-4949-9ef7-b5cdd6043451-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.561796 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.562516 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.562565 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74f1aeb064e68dd5bb300f4ee340cba58d92675dd4510f16aad36f018da9b6f4/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.564270 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dd029654-7895-4949-9ef7-b5cdd6043451-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.564314 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.564602 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.564918 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.565717 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.565882 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.580851 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dd029654-7895-4949-9ef7-b5cdd6043451-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.605463 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6qt\" (UniqueName: \"kubernetes.io/projected/dd029654-7895-4949-9ef7-b5cdd6043451-kube-api-access-4v6qt\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.628965 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e74d0cb-707b-46f9-94e4-b1f98c52eb48\") pod \"prometheus-metric-storage-0\" (UID: \"dd029654-7895-4949-9ef7-b5cdd6043451\") " pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.701399 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.760663 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.765965 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/03970489-bf21-4d19-afc2-bf8d39aa683e-etc-swift\") pod \"swift-storage-0\" (UID: \"03970489-bf21-4d19-afc2-bf8d39aa683e\") " pod="openstack/swift-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.867405 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 26 18:53:44 crc kubenswrapper[4737]: I0126 18:53:44.946040 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrckb" podUID="11408d0f-4b45-4dab-bc9e-965ac14aed79" containerName="ovn-controller" probeResult="failure" output=<
Jan 26 18:53:44 crc kubenswrapper[4737]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 26 18:53:44 crc kubenswrapper[4737]: >
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.002048 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539f99ad-d4f8-4e02-aca3-f247bc802698" path="/var/lib/kubelet/pods/539f99ad-d4f8-4e02-aca3-f247bc802698/volumes"
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.247962 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 26 18:53:45 crc kubenswrapper[4737]: W0126 18:53:45.254847 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd029654_7895_4949_9ef7_b5cdd6043451.slice/crio-6c232c9fa7ce0c329893fa55f0b6bfeef46e0ee3204e35a2e97bb7a46ef62493 WatchSource:0}: Error finding container 6c232c9fa7ce0c329893fa55f0b6bfeef46e0ee3204e35a2e97bb7a46ef62493: Status 404 returned error can't find the container with id 6c232c9fa7ce0c329893fa55f0b6bfeef46e0ee3204e35a2e97bb7a46ef62493
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.268686 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerStarted","Data":"6c232c9fa7ce0c329893fa55f0b6bfeef46e0ee3204e35a2e97bb7a46ef62493"}
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.523369 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.895048 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh"
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.901965 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.993274 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts\") pod \"8bccfe1e-6106-4184-8cff-37e44dfaef61\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") "
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.993365 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts\") pod \"7a88f548-5326-4e23-bda1-cf97ba384393\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") "
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.993600 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqdq\" (UniqueName: \"kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq\") pod \"7a88f548-5326-4e23-bda1-cf97ba384393\" (UID: \"7a88f548-5326-4e23-bda1-cf97ba384393\") "
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.993730 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9ps\" (UniqueName: \"kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps\") pod \"8bccfe1e-6106-4184-8cff-37e44dfaef61\" (UID: \"8bccfe1e-6106-4184-8cff-37e44dfaef61\") "
Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.994449 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a88f548-5326-4e23-bda1-cf97ba384393" (UID: "7a88f548-5326-4e23-bda1-cf97ba384393"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.995694 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a88f548-5326-4e23-bda1-cf97ba384393-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:45 crc kubenswrapper[4737]: I0126 18:53:45.996841 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bccfe1e-6106-4184-8cff-37e44dfaef61" (UID: "8bccfe1e-6106-4184-8cff-37e44dfaef61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.002627 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps" (OuterVolumeSpecName: "kube-api-access-5x9ps") pod "8bccfe1e-6106-4184-8cff-37e44dfaef61" (UID: "8bccfe1e-6106-4184-8cff-37e44dfaef61"). InnerVolumeSpecName "kube-api-access-5x9ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.005181 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq" (OuterVolumeSpecName: "kube-api-access-9rqdq") pod "7a88f548-5326-4e23-bda1-cf97ba384393" (UID: "7a88f548-5326-4e23-bda1-cf97ba384393"). InnerVolumeSpecName "kube-api-access-9rqdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.098632 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqdq\" (UniqueName: \"kubernetes.io/projected/7a88f548-5326-4e23-bda1-cf97ba384393-kube-api-access-9rqdq\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.098998 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9ps\" (UniqueName: \"kubernetes.io/projected/8bccfe1e-6106-4184-8cff-37e44dfaef61-kube-api-access-5x9ps\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.099016 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bccfe1e-6106-4184-8cff-37e44dfaef61-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.281138 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"d705534eb032b46cb56d062e8b33d6f1ca2d0c327e618672886fa9da6e45cb65"} Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.283572 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" event={"ID":"7a88f548-5326-4e23-bda1-cf97ba384393","Type":"ContainerDied","Data":"8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0"} Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.283618 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b47b0c21f5aba1199a2684b2520ebbcb60ca07bd6bc2fd4bc6625ceef97a1f0" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.283613 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.285097 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" event={"ID":"8bccfe1e-6106-4184-8cff-37e44dfaef61","Type":"ContainerDied","Data":"4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21"} Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.285135 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-f388-account-create-update-2kjdh" Jan 26 18:53:46 crc kubenswrapper[4737]: I0126 18:53:46.285141 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c22bfeefa58d09e1c58d6f9961dc4581686558b44deb3ad42bcca6e71bb1d21" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.301213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"c4e8460593e7802aeb48adf86afca06c9a2641af4a149cb26b629d1c1561e40d"} Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.571942 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:53:47 crc kubenswrapper[4737]: E0126 18:53:47.572365 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bccfe1e-6106-4184-8cff-37e44dfaef61" containerName="mariadb-account-create-update" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.572381 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bccfe1e-6106-4184-8cff-37e44dfaef61" containerName="mariadb-account-create-update" Jan 26 18:53:47 crc kubenswrapper[4737]: E0126 18:53:47.572405 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a88f548-5326-4e23-bda1-cf97ba384393" containerName="mariadb-database-create" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 
18:53:47.572412 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a88f548-5326-4e23-bda1-cf97ba384393" containerName="mariadb-database-create" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.572592 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a88f548-5326-4e23-bda1-cf97ba384393" containerName="mariadb-database-create" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.572618 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bccfe1e-6106-4184-8cff-37e44dfaef61" containerName="mariadb-account-create-update" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.574411 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.577291 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.603568 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.631456 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6zg\" (UniqueName: \"kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.631614 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.631658 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.667134 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x5zfk"] Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.668500 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.672297 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.685625 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x5zfk"] Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.734051 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.734136 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.734187 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6zg\" (UniqueName: \"kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg\") pod 
\"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.734243 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.734282 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqns\" (UniqueName: \"kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.743989 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.746756 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") pod \"mysqld-exporter-0\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.763882 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6zg\" (UniqueName: \"kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg\") pod \"mysqld-exporter-0\" (UID: 
\"7686b11b-6dd6-4748-9358-79a3885e118a\") " pod="openstack/mysqld-exporter-0" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.836724 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.836816 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvqns\" (UniqueName: \"kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.837970 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:47 crc kubenswrapper[4737]: I0126 18:53:47.857857 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvqns\" (UniqueName: \"kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns\") pod \"root-account-create-update-x5zfk\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:48 crc kubenswrapper[4737]: I0126 18:53:48.032162 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:53:48 crc kubenswrapper[4737]: I0126 18:53:48.033213 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:48 crc kubenswrapper[4737]: I0126 18:53:48.317848 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"43df0078f06f00baae85192d2df42c9cbeba2c3ab8aa211869c0cbce36863154"} Jan 26 18:53:48 crc kubenswrapper[4737]: I0126 18:53:48.866937 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:53:48 crc kubenswrapper[4737]: I0126 18:53:48.952743 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x5zfk"] Jan 26 18:53:48 crc kubenswrapper[4737]: W0126 18:53:48.957893 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4010ae2_e90f_44a2_99a0_28dd9db76d50.slice/crio-08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b WatchSource:0}: Error finding container 08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b: Status 404 returned error can't find the container with id 08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.332617 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5zfk" event={"ID":"b4010ae2-e90f-44a2-99a0-28dd9db76d50","Type":"ContainerStarted","Data":"58a5ad454430f0076c66274925b3c8b8a3b05c45ac6d886158b079bd6965f426"} Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.332970 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5zfk" event={"ID":"b4010ae2-e90f-44a2-99a0-28dd9db76d50","Type":"ContainerStarted","Data":"08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b"} Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.338595 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerStarted","Data":"002a05a2277f966dcaf38dec7907db684074f0c08d9dc91061ecc140f57bb472"} Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.340644 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7686b11b-6dd6-4748-9358-79a3885e118a","Type":"ContainerStarted","Data":"f9622f3c4edcb1044f695b8bcd667deb079c1982cc6323f18ac12ad84653fef4"} Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.343007 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"d4eb07d10f9a55ab9bf0f05539208e53b217ad2ed06dfa4c4ad50a1d4bd77dff"} Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.358342 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x5zfk" podStartSLOduration=2.358320668 podStartE2EDuration="2.358320668s" podCreationTimestamp="2026-01-26 18:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:49.356490655 +0000 UTC m=+1402.664685363" watchObservedRunningTime="2026-01-26 18:53:49.358320668 +0000 UTC m=+1402.666515376" Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.901091 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrckb" podUID="11408d0f-4b45-4dab-bc9e-965ac14aed79" containerName="ovn-controller" probeResult="failure" output=< Jan 26 18:53:49 crc kubenswrapper[4737]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 18:53:49 crc kubenswrapper[4737]: > Jan 26 18:53:49 crc kubenswrapper[4737]: I0126 18:53:49.925565 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tnjz7" Jan 26 
18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.150145 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrckb-config-d87mj"] Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.151970 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.155365 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.166942 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-d87mj"] Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.297572 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.297772 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.297836 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.298139 
4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.298206 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79zc\" (UniqueName: \"kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.298276 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.358129 4737 generic.go:334] "Generic (PLEG): container finished" podID="b4010ae2-e90f-44a2-99a0-28dd9db76d50" containerID="58a5ad454430f0076c66274925b3c8b8a3b05c45ac6d886158b079bd6965f426" exitCode=0 Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.358721 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5zfk" event={"ID":"b4010ae2-e90f-44a2-99a0-28dd9db76d50","Type":"ContainerDied","Data":"58a5ad454430f0076c66274925b3c8b8a3b05c45ac6d886158b079bd6965f426"} Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400108 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400163 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79zc\" (UniqueName: \"kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400203 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400279 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400356 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400383 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400570 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400604 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.400617 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.401385 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.402966 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.436847 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79zc\" (UniqueName: \"kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc\") pod \"ovn-controller-zrckb-config-d87mj\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:50 crc kubenswrapper[4737]: I0126 18:53:50.479263 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:53:54 crc kubenswrapper[4737]: I0126 18:53:54.896055 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zrckb" podUID="11408d0f-4b45-4dab-bc9e-965ac14aed79" containerName="ovn-controller" probeResult="failure" output=< Jan 26 18:53:54 crc kubenswrapper[4737]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 18:53:54 crc kubenswrapper[4737]: > Jan 26 18:53:56 crc kubenswrapper[4737]: I0126 18:53:56.184827 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 26 18:53:56 crc kubenswrapper[4737]: I0126 18:53:56.460501 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 26 18:53:56 crc kubenswrapper[4737]: I0126 18:53:56.484309 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:53:56 crc kubenswrapper[4737]: I0126 18:53:56.516560 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.402115 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.436215 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x5zfk" event={"ID":"b4010ae2-e90f-44a2-99a0-28dd9db76d50","Type":"ContainerDied","Data":"08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b"} Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.436253 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a31274421d3adc1a64d92f3d8dccbc4837f95acb94ab3430cf00f6b327d86b" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.436309 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x5zfk" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.559453 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvqns\" (UniqueName: \"kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns\") pod \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.559587 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts\") pod \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\" (UID: \"b4010ae2-e90f-44a2-99a0-28dd9db76d50\") " Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.560606 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4010ae2-e90f-44a2-99a0-28dd9db76d50" (UID: "b4010ae2-e90f-44a2-99a0-28dd9db76d50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.564479 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns" (OuterVolumeSpecName: "kube-api-access-xvqns") pod "b4010ae2-e90f-44a2-99a0-28dd9db76d50" (UID: "b4010ae2-e90f-44a2-99a0-28dd9db76d50"). InnerVolumeSpecName "kube-api-access-xvqns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.641837 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.663377 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvqns\" (UniqueName: \"kubernetes.io/projected/b4010ae2-e90f-44a2-99a0-28dd9db76d50-kube-api-access-xvqns\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.663838 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4010ae2-e90f-44a2-99a0-28dd9db76d50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:57 crc kubenswrapper[4737]: I0126 18:53:57.698941 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-d87mj"] Jan 26 18:53:58 crc kubenswrapper[4737]: I0126 18:53:58.460509 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-d87mj" event={"ID":"c741553d-2891-46b9-b086-951b945611d4","Type":"ContainerStarted","Data":"5ded3aed4f10969261e5907897ffcdd6c20883db2a436433a8bd4d0723911bc7"} Jan 26 18:53:58 crc kubenswrapper[4737]: I0126 18:53:58.464439 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8mqw" event={"ID":"b9db6e67-d109-41f6-bd12-a68553ab3bf6","Type":"ContainerStarted","Data":"ebc5e53482312752e5620db1c3faf6a43156abe8180fd55a0742f27476539166"} Jan 26 18:53:58 crc kubenswrapper[4737]: I0126 18:53:58.472429 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"478243f367bdffee7c55a11f72a5d521a66a40daea75882c9b01cb1b80bd046a"} Jan 26 18:53:58 crc kubenswrapper[4737]: I0126 18:53:58.496282 4737 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-db-sync-z8mqw" podStartSLOduration=2.946369773 podStartE2EDuration="19.496256152s" podCreationTimestamp="2026-01-26 18:53:39 +0000 UTC" firstStartedPulling="2026-01-26 18:53:40.74121497 +0000 UTC m=+1394.049409678" lastFinishedPulling="2026-01-26 18:53:57.291101349 +0000 UTC m=+1410.599296057" observedRunningTime="2026-01-26 18:53:58.49273124 +0000 UTC m=+1411.800925948" watchObservedRunningTime="2026-01-26 18:53:58.496256152 +0000 UTC m=+1411.804450860" Jan 26 18:53:59 crc kubenswrapper[4737]: I0126 18:53:59.485144 4737 generic.go:334] "Generic (PLEG): container finished" podID="c741553d-2891-46b9-b086-951b945611d4" containerID="30ea1d45258592e4482ad8d2cf21b32cd34002b74a4604b607e91a5e253c915b" exitCode=0 Jan 26 18:53:59 crc kubenswrapper[4737]: I0126 18:53:59.485234 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-d87mj" event={"ID":"c741553d-2891-46b9-b086-951b945611d4","Type":"ContainerDied","Data":"30ea1d45258592e4482ad8d2cf21b32cd34002b74a4604b607e91a5e253c915b"} Jan 26 18:53:59 crc kubenswrapper[4737]: I0126 18:53:59.489296 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7686b11b-6dd6-4748-9358-79a3885e118a","Type":"ContainerStarted","Data":"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99"} Jan 26 18:53:59 crc kubenswrapper[4737]: I0126 18:53:59.533839 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.626280189 podStartE2EDuration="12.533814377s" podCreationTimestamp="2026-01-26 18:53:47 +0000 UTC" firstStartedPulling="2026-01-26 18:53:48.881179314 +0000 UTC m=+1402.189374022" lastFinishedPulling="2026-01-26 18:53:58.788713502 +0000 UTC m=+1412.096908210" observedRunningTime="2026-01-26 18:53:59.52801551 +0000 UTC m=+1412.836210218" watchObservedRunningTime="2026-01-26 18:53:59.533814377 +0000 UTC m=+1412.842009095" Jan 26 18:53:59 
crc kubenswrapper[4737]: I0126 18:53:59.896256 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zrckb" Jan 26 18:54:00 crc kubenswrapper[4737]: I0126 18:54:00.918478 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:54:00 crc kubenswrapper[4737]: I0126 18:54:00.949526 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:54:00 crc kubenswrapper[4737]: I0126 18:54:00.949583 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054524 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054563 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054647 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054677 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79zc\" (UniqueName: \"kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054729 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.054752 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn\") pod \"c741553d-2891-46b9-b086-951b945611d4\" (UID: \"c741553d-2891-46b9-b086-951b945611d4\") " Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.055292 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.055567 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run" (OuterVolumeSpecName: "var-run") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.056103 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.056389 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts" (OuterVolumeSpecName: "scripts") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.056421 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.059650 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc" (OuterVolumeSpecName: "kube-api-access-l79zc") pod "c741553d-2891-46b9-b086-951b945611d4" (UID: "c741553d-2891-46b9-b086-951b945611d4"). InnerVolumeSpecName "kube-api-access-l79zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159565 4737 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159597 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159609 4737 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159621 4737 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c741553d-2891-46b9-b086-951b945611d4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159652 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l79zc\" (UniqueName: \"kubernetes.io/projected/c741553d-2891-46b9-b086-951b945611d4-kube-api-access-l79zc\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.159679 4737 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c741553d-2891-46b9-b086-951b945611d4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.540292 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-d87mj" event={"ID":"c741553d-2891-46b9-b086-951b945611d4","Type":"ContainerDied","Data":"5ded3aed4f10969261e5907897ffcdd6c20883db2a436433a8bd4d0723911bc7"} Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.540365 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ded3aed4f10969261e5907897ffcdd6c20883db2a436433a8bd4d0723911bc7" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.540500 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-d87mj" Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.546584 4737 generic.go:334] "Generic (PLEG): container finished" podID="dd029654-7895-4949-9ef7-b5cdd6043451" containerID="002a05a2277f966dcaf38dec7907db684074f0c08d9dc91061ecc140f57bb472" exitCode=0 Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.546782 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerDied","Data":"002a05a2277f966dcaf38dec7907db684074f0c08d9dc91061ecc140f57bb472"} Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.558392 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"c9a9af4c6e92669ae2ab43e555324de3b0903eac9052d7662fca655849af240e"} Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.558440 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"76c8a13a9b103412bebd614bd9ff35449d01dccf07c2b9ff56bdce3b728e797d"} Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.558458 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"19bcdeb6922c4540816b5413a1d4c81909395af57b155785bdb87bac360dbf17"} Jan 26 18:54:01 crc kubenswrapper[4737]: I0126 18:54:01.558467 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"53a53adff018f6c105e31f8311e1e3a26c29a8c4c2c0b3b2450d8c50321f71d8"} Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.042787 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zrckb-config-d87mj"] Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.055456 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zrckb-config-d87mj"] Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.078122 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrckb-config-dx6zg"] Jan 26 18:54:02 crc kubenswrapper[4737]: E0126 18:54:02.078590 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c741553d-2891-46b9-b086-951b945611d4" containerName="ovn-config" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.078607 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c741553d-2891-46b9-b086-951b945611d4" containerName="ovn-config" Jan 26 18:54:02 crc kubenswrapper[4737]: E0126 18:54:02.078623 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4010ae2-e90f-44a2-99a0-28dd9db76d50" containerName="mariadb-account-create-update" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.078630 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b4010ae2-e90f-44a2-99a0-28dd9db76d50" containerName="mariadb-account-create-update" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.078819 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4010ae2-e90f-44a2-99a0-28dd9db76d50" containerName="mariadb-account-create-update" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.078843 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c741553d-2891-46b9-b086-951b945611d4" containerName="ovn-config" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.079623 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.082268 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.102774 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-dx6zg"] Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.179742 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.179814 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.179974 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.180166 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.180196 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.180357 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrhq\" (UniqueName: \"kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282515 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282592 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282641 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282693 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282755 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.282795 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrhq\" (UniqueName: \"kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.283384 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.283712 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.283820 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.284537 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.286293 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.305137 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4wrhq\" (UniqueName: \"kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq\") pod \"ovn-controller-zrckb-config-dx6zg\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.400378 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:02 crc kubenswrapper[4737]: I0126 18:54:02.572819 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerStarted","Data":"e077c2a5ddd03d329c33333d80d8c25cb99b56f915ef055d5dd4a0d5b7cf21a9"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.031315 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c741553d-2891-46b9-b086-951b945611d4" path="/var/lib/kubelet/pods/c741553d-2891-46b9-b086-951b945611d4/volumes" Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.104655 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-dx6zg"] Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.599032 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-dx6zg" event={"ID":"e1492ea9-c901-47d5-b9d1-d95e65eec0b6","Type":"ContainerStarted","Data":"f20e4d4c577e8e539ad11813ea69c80f37dee7eabb6f108154211ee6d49e87e9"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.599406 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-dx6zg" event={"ID":"e1492ea9-c901-47d5-b9d1-d95e65eec0b6","Type":"ContainerStarted","Data":"fa66645fb48c6f654fd662d3463a88eb07b98dfbf20e930f3d399052b6881bcf"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.614783 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"2ead80b5f5c7669276fe504cf786a15d9c28216f8f03163fa952cdb610f1b7c2"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.614837 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"c7a8e39ed14b9b0770b9bec00cb2c0519994fb2261ecb57ebc2317c48b16cdcb"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.614853 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"637fea177171260210904b2dfcc1e501b7b59d4442d4413239dd4a489f4cd1d5"} Jan 26 18:54:03 crc kubenswrapper[4737]: I0126 18:54:03.627307 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zrckb-config-dx6zg" podStartSLOduration=1.627288756 podStartE2EDuration="1.627288756s" podCreationTimestamp="2026-01-26 18:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:03.625663708 +0000 UTC m=+1416.933858426" watchObservedRunningTime="2026-01-26 18:54:03.627288756 +0000 UTC m=+1416.935483464" Jan 26 18:54:04 crc kubenswrapper[4737]: I0126 18:54:04.626178 4737 generic.go:334] "Generic (PLEG): container finished" podID="e1492ea9-c901-47d5-b9d1-d95e65eec0b6" containerID="f20e4d4c577e8e539ad11813ea69c80f37dee7eabb6f108154211ee6d49e87e9" exitCode=0 Jan 26 18:54:04 crc kubenswrapper[4737]: I0126 18:54:04.627221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-dx6zg" event={"ID":"e1492ea9-c901-47d5-b9d1-d95e65eec0b6","Type":"ContainerDied","Data":"f20e4d4c577e8e539ad11813ea69c80f37dee7eabb6f108154211ee6d49e87e9"} Jan 26 18:54:04 crc kubenswrapper[4737]: I0126 18:54:04.639708 4737 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"dad89d0610e9b5e1a9e784b5084e1fb09169c19826a3fa536bbf737f2bfffed2"} Jan 26 18:54:04 crc kubenswrapper[4737]: I0126 18:54:04.639758 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"3d76d75f9abc8a4d502007e3930ecaefcd131562dbe9f2bc632045db18910a98"} Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.661384 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerStarted","Data":"4aea1b8ebe69c3f07fdf723a8c781aec92cb6d1e2da4f06dd0bf2b45a689a79e"} Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.662425 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dd029654-7895-4949-9ef7-b5cdd6043451","Type":"ContainerStarted","Data":"34091d46c5b225c0a930d4f7d89f7786b19ac77f8477f3aeb280dc9b37097626"} Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.664760 4737 generic.go:334] "Generic (PLEG): container finished" podID="b9db6e67-d109-41f6-bd12-a68553ab3bf6" containerID="ebc5e53482312752e5620db1c3faf6a43156abe8180fd55a0742f27476539166" exitCode=0 Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.664840 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8mqw" event={"ID":"b9db6e67-d109-41f6-bd12-a68553ab3bf6","Type":"ContainerDied","Data":"ebc5e53482312752e5620db1c3faf6a43156abe8180fd55a0742f27476539166"} Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.679199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"ab457006b71575d3d3339da94a7d74caadcc86e5b072bdd029b6af90d79b2eaf"} 
Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.679726 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"03970489-bf21-4d19-afc2-bf8d39aa683e","Type":"ContainerStarted","Data":"62de52c4c72235bbe9a4223235145956926f948313653894ea9ed79535eefc7d"} Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.698987 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.698959764 podStartE2EDuration="21.698959764s" podCreationTimestamp="2026-01-26 18:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:05.688838514 +0000 UTC m=+1418.997033222" watchObservedRunningTime="2026-01-26 18:54:05.698959764 +0000 UTC m=+1419.007154472" Jan 26 18:54:05 crc kubenswrapper[4737]: I0126 18:54:05.739870 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.660908739999996 podStartE2EDuration="54.73984957s" podCreationTimestamp="2026-01-26 18:53:11 +0000 UTC" firstStartedPulling="2026-01-26 18:53:45.585496216 +0000 UTC m=+1398.893690924" lastFinishedPulling="2026-01-26 18:54:02.664437036 +0000 UTC m=+1415.972631754" observedRunningTime="2026-01-26 18:54:05.733413968 +0000 UTC m=+1419.041608706" watchObservedRunningTime="2026-01-26 18:54:05.73984957 +0000 UTC m=+1419.048044278" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.063406 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.074874 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.077656 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.089155 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:06 crc kubenswrapper[4737]: E0126 18:54:06.106220 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b024f4_26dd_4b1f_91af_0785762d6793.slice\": RecentStats: unable to find data in memory cache]" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.182296 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.192788 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.192901 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwmx\" (UniqueName: \"kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.192934 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.193018 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.193100 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.193123 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.210731 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.293988 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294155 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wrhq\" (UniqueName: \"kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294254 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294285 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294328 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294433 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run\") pod \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\" (UID: \"e1492ea9-c901-47d5-b9d1-d95e65eec0b6\") " Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294696 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294718 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294751 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294751 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run" (OuterVolumeSpecName: "var-run") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294855 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwmx\" (UniqueName: \"kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.294915 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.295333 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.295487 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.295522 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " 
pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296079 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296113 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296178 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts" (OuterVolumeSpecName: "scripts") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296242 4737 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296654 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.296999 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.297210 4737 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.297237 4737 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.297712 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.297928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.302337 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq" (OuterVolumeSpecName: "kube-api-access-4wrhq") pod "e1492ea9-c901-47d5-b9d1-d95e65eec0b6" (UID: "e1492ea9-c901-47d5-b9d1-d95e65eec0b6"). InnerVolumeSpecName "kube-api-access-4wrhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.326369 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwmx\" (UniqueName: \"kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx\") pod \"dnsmasq-dns-764c5664d7-rk8q8\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.399183 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.399447 4737 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.399463 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wrhq\" (UniqueName: \"kubernetes.io/projected/e1492ea9-c901-47d5-b9d1-d95e65eec0b6-kube-api-access-4wrhq\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.462338 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.502113 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.512840 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.702846 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-dx6zg" event={"ID":"e1492ea9-c901-47d5-b9d1-d95e65eec0b6","Type":"ContainerDied","Data":"fa66645fb48c6f654fd662d3463a88eb07b98dfbf20e930f3d399052b6881bcf"} Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.702922 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa66645fb48c6f654fd662d3463a88eb07b98dfbf20e930f3d399052b6881bcf" Jan 26 18:54:06 crc kubenswrapper[4737]: I0126 18:54:06.703007 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-dx6zg" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.077367 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.344831 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zrckb-config-dx6zg"] Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.356398 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zrckb-config-dx6zg"] Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.454734 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z8mqw" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.461750 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zrckb-config-ndxsx"] Jan 26 18:54:07 crc kubenswrapper[4737]: E0126 18:54:07.462272 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1492ea9-c901-47d5-b9d1-d95e65eec0b6" containerName="ovn-config" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.462388 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1492ea9-c901-47d5-b9d1-d95e65eec0b6" containerName="ovn-config" Jan 26 18:54:07 crc kubenswrapper[4737]: E0126 18:54:07.462420 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9db6e67-d109-41f6-bd12-a68553ab3bf6" containerName="glance-db-sync" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.462426 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9db6e67-d109-41f6-bd12-a68553ab3bf6" containerName="glance-db-sync" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.462634 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9db6e67-d109-41f6-bd12-a68553ab3bf6" containerName="glance-db-sync" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.462654 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1492ea9-c901-47d5-b9d1-d95e65eec0b6" containerName="ovn-config" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.463445 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.466686 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.473447 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-ndxsx"] Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.554705 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle\") pod \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.554788 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbpkp\" (UniqueName: \"kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp\") pod \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.554833 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data\") pod \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.554880 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data\") pod \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\" (UID: \"b9db6e67-d109-41f6-bd12-a68553ab3bf6\") " Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555343 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555385 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555410 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555459 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstft\" (UniqueName: \"kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555476 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.555494 
4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.559944 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp" (OuterVolumeSpecName: "kube-api-access-wbpkp") pod "b9db6e67-d109-41f6-bd12-a68553ab3bf6" (UID: "b9db6e67-d109-41f6-bd12-a68553ab3bf6"). InnerVolumeSpecName "kube-api-access-wbpkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.561256 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b9db6e67-d109-41f6-bd12-a68553ab3bf6" (UID: "b9db6e67-d109-41f6-bd12-a68553ab3bf6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.588914 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9db6e67-d109-41f6-bd12-a68553ab3bf6" (UID: "b9db6e67-d109-41f6-bd12-a68553ab3bf6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.615930 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data" (OuterVolumeSpecName: "config-data") pod "b9db6e67-d109-41f6-bd12-a68553ab3bf6" (UID: "b9db6e67-d109-41f6-bd12-a68553ab3bf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.657364 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dstft\" (UniqueName: \"kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.657482 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.657510 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.657933 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: 
\"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.657994 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658024 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658151 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658200 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658219 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbpkp\" (UniqueName: \"kubernetes.io/projected/b9db6e67-d109-41f6-bd12-a68553ab3bf6-kube-api-access-wbpkp\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658306 4737 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658328 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db6e67-d109-41f6-bd12-a68553ab3bf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658268 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658265 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.658471 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.661001 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.678163 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dstft\" (UniqueName: \"kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft\") pod \"ovn-controller-zrckb-config-ndxsx\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.712862 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z8mqw" event={"ID":"b9db6e67-d109-41f6-bd12-a68553ab3bf6","Type":"ContainerDied","Data":"8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b"} Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.713030 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z8mqw" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.713665 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8023d92cc4b13c767a30c9d050f1d4b5ead390a533eb3aa10afb01c0149e1c5b" Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.715382 4737 generic.go:334] "Generic (PLEG): container finished" podID="12b024f4-26dd-4b1f-91af-0785762d6793" containerID="3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53" exitCode=0 Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.715446 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" event={"ID":"12b024f4-26dd-4b1f-91af-0785762d6793","Type":"ContainerDied","Data":"3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53"} Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.715481 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" event={"ID":"12b024f4-26dd-4b1f-91af-0785762d6793","Type":"ContainerStarted","Data":"f32f7dbe0f6291c3529f976ddf3dfd3a4e94702571ac6d7587d04eb51b702529"} Jan 26 18:54:07 crc kubenswrapper[4737]: I0126 18:54:07.845826 4737 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.214181 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.267442 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"] Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.271624 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.291835 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"] Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.412524 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.412683 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.412811 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.412867 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.412891 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.413100 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tph\" (UniqueName: \"kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.502348 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zrckb-config-ndxsx"] Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524154 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524240 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524620 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524737 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524766 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524861 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5tph\" (UniqueName: \"kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.524984 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.525917 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.526568 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.527211 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.538318 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.556565 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5tph\" (UniqueName: \"kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph\") pod 
\"dnsmasq-dns-74f6bcbc87-p5nqb\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.624727 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.743790 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" event={"ID":"12b024f4-26dd-4b1f-91af-0785762d6793","Type":"ContainerStarted","Data":"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b"} Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.744416 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.747262 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-ndxsx" event={"ID":"49d598de-7b50-45ba-a269-42509e1cb38e","Type":"ContainerStarted","Data":"c695089621f790daf1a0422c2c09de3289561443fadb775e1aa0c6f44173c7b0"} Jan 26 18:54:08 crc kubenswrapper[4737]: I0126 18:54:08.798156 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" podStartSLOduration=2.798132929 podStartE2EDuration="2.798132929s" podCreationTimestamp="2026-01-26 18:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:08.776889468 +0000 UTC m=+1422.085084176" watchObservedRunningTime="2026-01-26 18:54:08.798132929 +0000 UTC m=+1422.106327637" Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:08.999300 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1492ea9-c901-47d5-b9d1-d95e65eec0b6" path="/var/lib/kubelet/pods/e1492ea9-c901-47d5-b9d1-d95e65eec0b6/volumes" Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 
18:54:09.240299 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"] Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.701863 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.759258 4737 generic.go:334] "Generic (PLEG): container finished" podID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerID="f2f9579a9dff8ba4e02e9b187368d702c7fcb91178fd7706d4f0b4ba38f27103" exitCode=0 Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.759363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" event={"ID":"fdf19f0a-8101-42b8-85d0-c97f63045b3d","Type":"ContainerDied","Data":"f2f9579a9dff8ba4e02e9b187368d702c7fcb91178fd7706d4f0b4ba38f27103"} Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.760609 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" event={"ID":"fdf19f0a-8101-42b8-85d0-c97f63045b3d","Type":"ContainerStarted","Data":"dcaebecce088f70e98083f98dcd6c618ccbbf2f031da050b70e64844554dbfaf"} Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.764481 4737 generic.go:334] "Generic (PLEG): container finished" podID="49d598de-7b50-45ba-a269-42509e1cb38e" containerID="2ebfdcdfdbe14aa0e28829a0a38464bd85c37378a7a7e87d87baabaa0d87c375" exitCode=0 Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.764710 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="dnsmasq-dns" containerID="cri-o://7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b" gracePeriod=10 Jan 26 18:54:09 crc kubenswrapper[4737]: I0126 18:54:09.764859 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-ndxsx" 
event={"ID":"49d598de-7b50-45ba-a269-42509e1cb38e","Type":"ContainerDied","Data":"2ebfdcdfdbe14aa0e28829a0a38464bd85c37378a7a7e87d87baabaa0d87c375"} Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.409755 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.567927 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0\") pod \"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.568038 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc\") pod \"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.568064 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb\") pod \"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.568203 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config\") pod \"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.568356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb\") pod 
\"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.568575 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwmx\" (UniqueName: \"kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx\") pod \"12b024f4-26dd-4b1f-91af-0785762d6793\" (UID: \"12b024f4-26dd-4b1f-91af-0785762d6793\") " Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.575078 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx" (OuterVolumeSpecName: "kube-api-access-nqwmx") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "kube-api-access-nqwmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.643868 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config" (OuterVolumeSpecName: "config") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.643894 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.644096 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.645914 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.655788 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12b024f4-26dd-4b1f-91af-0785762d6793" (UID: "12b024f4-26dd-4b1f-91af-0785762d6793"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.670931 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.670970 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.670981 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.670991 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.671000 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12b024f4-26dd-4b1f-91af-0785762d6793-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.671008 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwmx\" (UniqueName: \"kubernetes.io/projected/12b024f4-26dd-4b1f-91af-0785762d6793-kube-api-access-nqwmx\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.775684 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" event={"ID":"fdf19f0a-8101-42b8-85d0-c97f63045b3d","Type":"ContainerStarted","Data":"0139b3b8a2667f813f0f611daa16ab2f4f01af86dcb8ff3a6f36ef7c7ed9b22e"} Jan 26 18:54:10 crc 
kubenswrapper[4737]: I0126 18:54:10.777024 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.780510 4737 generic.go:334] "Generic (PLEG): container finished" podID="12b024f4-26dd-4b1f-91af-0785762d6793" containerID="7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b" exitCode=0 Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.780825 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.781225 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" event={"ID":"12b024f4-26dd-4b1f-91af-0785762d6793","Type":"ContainerDied","Data":"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b"} Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.781276 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-rk8q8" event={"ID":"12b024f4-26dd-4b1f-91af-0785762d6793","Type":"ContainerDied","Data":"f32f7dbe0f6291c3529f976ddf3dfd3a4e94702571ac6d7587d04eb51b702529"} Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.781302 4737 scope.go:117] "RemoveContainer" containerID="7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.802911 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podStartSLOduration=2.802890406 podStartE2EDuration="2.802890406s" podCreationTimestamp="2026-01-26 18:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:10.795326628 +0000 UTC m=+1424.103521336" watchObservedRunningTime="2026-01-26 18:54:10.802890406 +0000 UTC m=+1424.111085114" Jan 26 18:54:10 crc 
kubenswrapper[4737]: I0126 18:54:10.845415 4737 scope.go:117] "RemoveContainer" containerID="3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.860928 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.883036 4737 scope.go:117] "RemoveContainer" containerID="7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.884019 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-rk8q8"] Jan 26 18:54:10 crc kubenswrapper[4737]: E0126 18:54:10.886268 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b\": container with ID starting with 7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b not found: ID does not exist" containerID="7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.886307 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b"} err="failed to get container status \"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b\": rpc error: code = NotFound desc = could not find container \"7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b\": container with ID starting with 7a36badbe520f6fa6ca2558b3903752474767bee7352927e464463f963be461b not found: ID does not exist" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.886333 4737 scope.go:117] "RemoveContainer" containerID="3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53" Jan 26 18:54:10 crc kubenswrapper[4737]: E0126 18:54:10.886875 4737 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53\": container with ID starting with 3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53 not found: ID does not exist" containerID="3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53" Jan 26 18:54:10 crc kubenswrapper[4737]: I0126 18:54:10.886907 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53"} err="failed to get container status \"3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53\": rpc error: code = NotFound desc = could not find container \"3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53\": container with ID starting with 3aafe3e750fb5ca9b5cfa2ed24a677cbb26c6ef0f8095145cd9094a192737c53 not found: ID does not exist" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.005322 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" path="/var/lib/kubelet/pods/12b024f4-26dd-4b1f-91af-0785762d6793/volumes" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.248534 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387563 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387709 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387791 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387815 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387887 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dstft\" (UniqueName: \"kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.387978 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run\") pod \"49d598de-7b50-45ba-a269-42509e1cb38e\" (UID: \"49d598de-7b50-45ba-a269-42509e1cb38e\") " Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.388343 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.388409 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run" (OuterVolumeSpecName: "var-run") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.388577 4737 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.389112 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.389188 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.389284 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts" (OuterVolumeSpecName: "scripts") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.395173 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft" (OuterVolumeSpecName: "kube-api-access-dstft") pod "49d598de-7b50-45ba-a269-42509e1cb38e" (UID: "49d598de-7b50-45ba-a269-42509e1cb38e"). InnerVolumeSpecName "kube-api-access-dstft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.493752 4737 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.493791 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49d598de-7b50-45ba-a269-42509e1cb38e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.493805 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dstft\" (UniqueName: \"kubernetes.io/projected/49d598de-7b50-45ba-a269-42509e1cb38e-kube-api-access-dstft\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.493818 4737 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.493833 4737 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/49d598de-7b50-45ba-a269-42509e1cb38e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.791516 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zrckb-config-ndxsx" event={"ID":"49d598de-7b50-45ba-a269-42509e1cb38e","Type":"ContainerDied","Data":"c695089621f790daf1a0422c2c09de3289561443fadb775e1aa0c6f44173c7b0"} Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.791882 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c695089621f790daf1a0422c2c09de3289561443fadb775e1aa0c6f44173c7b0" Jan 26 18:54:11 crc kubenswrapper[4737]: I0126 18:54:11.791542 4737 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zrckb-config-ndxsx" Jan 26 18:54:12 crc kubenswrapper[4737]: I0126 18:54:12.332491 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zrckb-config-ndxsx"] Jan 26 18:54:12 crc kubenswrapper[4737]: I0126 18:54:12.346116 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zrckb-config-ndxsx"] Jan 26 18:54:12 crc kubenswrapper[4737]: I0126 18:54:12.993600 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d598de-7b50-45ba-a269-42509e1cb38e" path="/var/lib/kubelet/pods/49d598de-7b50-45ba-a269-42509e1cb38e/volumes" Jan 26 18:54:14 crc kubenswrapper[4737]: I0126 18:54:14.701878 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 18:54:14 crc kubenswrapper[4737]: I0126 18:54:14.710824 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 18:54:14 crc kubenswrapper[4737]: I0126 18:54:14.824472 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.514327 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.933898 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w2r6t"] Jan 26 18:54:16 crc kubenswrapper[4737]: E0126 18:54:16.934481 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d598de-7b50-45ba-a269-42509e1cb38e" containerName="ovn-config" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.934508 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d598de-7b50-45ba-a269-42509e1cb38e" containerName="ovn-config" Jan 26 18:54:16 crc 
kubenswrapper[4737]: E0126 18:54:16.934545 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="init" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.934557 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="init" Jan 26 18:54:16 crc kubenswrapper[4737]: E0126 18:54:16.934572 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="dnsmasq-dns" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.934580 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="dnsmasq-dns" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.934874 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b024f4-26dd-4b1f-91af-0785762d6793" containerName="dnsmasq-dns" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.934913 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d598de-7b50-45ba-a269-42509e1cb38e" containerName="ovn-config" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.935787 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:16 crc kubenswrapper[4737]: I0126 18:54:16.959957 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w2r6t"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.038180 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2mhwn"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.041052 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.046821 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.046969 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4st\" (UniqueName: \"kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.054059 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2mhwn"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.135990 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-j7bgx"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.141857 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.159576 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.159758 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfj9g\" (UniqueName: \"kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.159888 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.160178 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4st\" (UniqueName: \"kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.161841 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " 
pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.217476 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8950-account-create-update-l8njp"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.228460 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4st\" (UniqueName: \"kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st\") pod \"barbican-db-create-w2r6t\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.234051 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.241809 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.263830 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.263921 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.263957 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrhv\" (UniqueName: 
\"kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.264017 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfj9g\" (UniqueName: \"kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.265561 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.266817 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.285034 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j7bgx"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.315496 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8950-account-create-update-l8njp"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.324335 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfj9g\" (UniqueName: \"kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g\") pod \"cinder-db-create-2mhwn\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.366895 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-74b2-account-create-update-7gqdr"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.368399 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.368392 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.368620 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.368704 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hrhv\" (UniqueName: \"kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.372501 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl752\" (UniqueName: \"kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.370500 4737 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.369783 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.377137 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.403168 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-74b2-account-create-update-7gqdr"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.412868 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hrhv\" (UniqueName: \"kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv\") pod \"heat-db-create-j7bgx\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.419398 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z87tf"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.421224 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.425086 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.425912 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z69hk" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.426886 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.431849 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.433722 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6bef-account-create-update-nnbl4"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.435510 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.439927 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.452645 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6bef-account-create-update-nnbl4"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.469154 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87tf"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.472731 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.474532 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzf4d\" (UniqueName: \"kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.474626 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.475290 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.478731 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl752\" (UniqueName: \"kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.484951 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.499144 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl752\" (UniqueName: \"kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752\") pod \"heat-8950-account-create-update-l8njp\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.579212 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-p7gjm"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.586030 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmth\" (UniqueName: \"kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.589946 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmnk\" (UniqueName: \"kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.590048 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle\") pod \"keystone-db-sync-z87tf\" (UID: 
\"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.595248 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.595473 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzf4d\" (UniqueName: \"kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.595591 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.595620 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.599488 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.605869 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p7gjm"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.606000 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.606642 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.680217 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzf4d\" (UniqueName: \"kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d\") pod \"barbican-74b2-account-create-update-7gqdr\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697499 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjhbd\" (UniqueName: \"kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697598 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmth\" (UniqueName: \"kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 
26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697626 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmnk\" (UniqueName: \"kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697661 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697680 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697704 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.697795 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 
18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.703371 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.707894 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-698d-account-create-update-vzz2k"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.709692 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.712979 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.713016 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.717770 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.791452 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmnk\" (UniqueName: \"kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk\") pod \"cinder-6bef-account-create-update-nnbl4\" (UID: 
\"a431f6b9-1717-4441-88e6-81b22a7abde0\") " pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.801012 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.801783 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.801903 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlczm\" (UniqueName: \"kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.807095 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.807497 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjhbd\" (UniqueName: 
\"kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.810783 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmth\" (UniqueName: \"kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth\") pod \"keystone-db-sync-z87tf\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.815393 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-698d-account-create-update-vzz2k"] Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.831732 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.844751 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjhbd\" (UniqueName: \"kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd\") pod \"neutron-db-create-p7gjm\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.937916 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlczm\" (UniqueName: \"kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.937968 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.941913 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.966549 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlczm\" (UniqueName: \"kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm\") pod \"neutron-698d-account-create-update-vzz2k\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:17 crc kubenswrapper[4737]: I0126 18:54:17.977539 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.085187 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2mhwn"] Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.090737 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.143102 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.166455 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.259424 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w2r6t"] Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.630388 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.698578 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j7bgx"] Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.728839 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.730919 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7xhdj" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="dnsmasq-dns" containerID="cri-o://2c9c0d4d99b533cca672fb2062e4a3ece43523c094dbf869dabc5092baf30fd6" gracePeriod=10 Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.748428 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8950-account-create-update-l8njp"] Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.768513 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7xhdj" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.911541 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2mhwn" event={"ID":"15d05428-1fe0-474f-8b0e-761f90c035bd","Type":"ContainerStarted","Data":"80e54d313887f949b25e630eb5d0517b1f60fa9851f9bc2d5bf26545c5ad7579"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.911935 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2mhwn" event={"ID":"15d05428-1fe0-474f-8b0e-761f90c035bd","Type":"ContainerStarted","Data":"2f0094ff5c0f00600390b5f9cb8612284d8c30d4704b12a622920dd9e6b558c4"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.919446 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7bgx" event={"ID":"bcff1539-022e-45f1-9e55-2e633b8a0346","Type":"ContainerStarted","Data":"8276ac3b9c953a3da1fd1b6306d9446e5b2f3645f3460901ea0d74ffb3138d15"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.921570 4737 generic.go:334] "Generic (PLEG): container finished" podID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerID="2c9c0d4d99b533cca672fb2062e4a3ece43523c094dbf869dabc5092baf30fd6" exitCode=0 Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.921632 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7xhdj" event={"ID":"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2","Type":"ContainerDied","Data":"2c9c0d4d99b533cca672fb2062e4a3ece43523c094dbf869dabc5092baf30fd6"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.924440 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w2r6t" event={"ID":"30b6ccc8-eb69-4780-b3dc-f53000859836","Type":"ContainerStarted","Data":"126f46c0424173aaa97d436ffa78f8be0f8c62dedad0d5fac4a866c3980104a2"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.924520 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w2r6t" event={"ID":"30b6ccc8-eb69-4780-b3dc-f53000859836","Type":"ContainerStarted","Data":"d925755c2a6caecb7a4dbb337fc349c80dc568fb1f296bdfd9ca14126ba383af"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.930930 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8950-account-create-update-l8njp" 
event={"ID":"7e85eb58-126a-4fe4-9006-e46c8baceac8","Type":"ContainerStarted","Data":"ffc42ffb51d00724c3af740478e7603c1c5c34bb6da3dd209f5f63972a90ecef"} Jan 26 18:54:18 crc kubenswrapper[4737]: I0126 18:54:18.974682 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-w2r6t" podStartSLOduration=2.974662703 podStartE2EDuration="2.974662703s" podCreationTimestamp="2026-01-26 18:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:18.958088702 +0000 UTC m=+1432.266283410" watchObservedRunningTime="2026-01-26 18:54:18.974662703 +0000 UTC m=+1432.282857411" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.068447 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87tf"] Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.088668 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-74b2-account-create-update-7gqdr"] Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.569367 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6bef-account-create-update-nnbl4"] Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.651123 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.707865 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc\") pod \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.708274 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb\") pod \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.708317 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config\") pod \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.708373 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb\") pod \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.708535 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljmbk\" (UniqueName: \"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk\") pod \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\" (UID: \"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2\") " Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.722159 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk" (OuterVolumeSpecName: "kube-api-access-ljmbk") pod "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" (UID: "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2"). InnerVolumeSpecName "kube-api-access-ljmbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.782369 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" (UID: "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.811374 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljmbk\" (UniqueName: \"kubernetes.io/projected/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-kube-api-access-ljmbk\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.811421 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.824870 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" (UID: "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.857675 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-698d-account-create-update-vzz2k"] Jan 26 18:54:19 crc kubenswrapper[4737]: W0126 18:54:19.868124 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf06fdcbc_c0a2_4149_903f_cad2c7c9dc9a.slice/crio-632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd WatchSource:0}: Error finding container 632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd: Status 404 returned error can't find the container with id 632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.880782 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config" (OuterVolumeSpecName: "config") pod "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" (UID: "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.883699 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" (UID: "4943ea2e-2d2e-4024-97f5-b7a2b288e3b2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.887298 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p7gjm"] Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.913158 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.913190 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.913202 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.958065 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p7gjm" event={"ID":"89375687-18cd-4325-87c3-6be0a83ebfd1","Type":"ContainerStarted","Data":"f331553d7f5e4fbc3cd7f5dfa239616fd0dbff9cbdc232d8debfdc9113b9869f"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.962294 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74b2-account-create-update-7gqdr" event={"ID":"f82324be-8ee8-45b6-8f16-23c70c1e9011","Type":"ContainerStarted","Data":"eea43a1d80a8f10682440132a37fc26ab6737c3a964263f4286dce176f6d7459"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.962344 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74b2-account-create-update-7gqdr" event={"ID":"f82324be-8ee8-45b6-8f16-23c70c1e9011","Type":"ContainerStarted","Data":"50a0e498561b36f398a1f7395351fb58e43ad0ebe866de7f236f3213e522ea0e"} Jan 26 18:54:19 crc kubenswrapper[4737]: 
I0126 18:54:19.968348 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87tf" event={"ID":"86138c40-9654-4e2b-8fe9-13d418f93750","Type":"ContainerStarted","Data":"5eb61e1dab396b3e7fa148f8a4d2415bf1abcc21ffcaff6717c61e69cd363fe3"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.972039 4737 generic.go:334] "Generic (PLEG): container finished" podID="7e85eb58-126a-4fe4-9006-e46c8baceac8" containerID="35e081f01a7a2ffb1625e45a980fb08216ed8ae600eff76c6c04e8be9677a3bc" exitCode=0 Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.972209 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8950-account-create-update-l8njp" event={"ID":"7e85eb58-126a-4fe4-9006-e46c8baceac8","Type":"ContainerDied","Data":"35e081f01a7a2ffb1625e45a980fb08216ed8ae600eff76c6c04e8be9677a3bc"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.980883 4737 generic.go:334] "Generic (PLEG): container finished" podID="bcff1539-022e-45f1-9e55-2e633b8a0346" containerID="c0b86c586a0dd74fc3f94c27cc8df4b69d63c9e535411907c719269b90d2d16d" exitCode=0 Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.981088 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7bgx" event={"ID":"bcff1539-022e-45f1-9e55-2e633b8a0346","Type":"ContainerDied","Data":"c0b86c586a0dd74fc3f94c27cc8df4b69d63c9e535411907c719269b90d2d16d"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.987598 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7xhdj" event={"ID":"4943ea2e-2d2e-4024-97f5-b7a2b288e3b2","Type":"ContainerDied","Data":"9da804a911f9cfc50e69b67ac067af3d65ee5e741c5cbcd32cb0d1bccfa39bd9"} Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.987669 4737 scope.go:117] "RemoveContainer" containerID="2c9c0d4d99b533cca672fb2062e4a3ece43523c094dbf869dabc5092baf30fd6" Jan 26 18:54:19 crc kubenswrapper[4737]: I0126 18:54:19.987714 4737 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7xhdj" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.009011 4737 generic.go:334] "Generic (PLEG): container finished" podID="30b6ccc8-eb69-4780-b3dc-f53000859836" containerID="126f46c0424173aaa97d436ffa78f8be0f8c62dedad0d5fac4a866c3980104a2" exitCode=0 Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.009156 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w2r6t" event={"ID":"30b6ccc8-eb69-4780-b3dc-f53000859836","Type":"ContainerDied","Data":"126f46c0424173aaa97d436ffa78f8be0f8c62dedad0d5fac4a866c3980104a2"} Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.035531 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-74b2-account-create-update-7gqdr" podStartSLOduration=3.012060084 podStartE2EDuration="3.012060084s" podCreationTimestamp="2026-01-26 18:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:19.985886536 +0000 UTC m=+1433.294081334" watchObservedRunningTime="2026-01-26 18:54:20.012060084 +0000 UTC m=+1433.320254792" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.041914 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6bef-account-create-update-nnbl4" event={"ID":"a431f6b9-1717-4441-88e6-81b22a7abde0","Type":"ContainerStarted","Data":"c0be04bd7efa2bcfe9cd8b7e461991b005c3c61b1d8d6258725a790524bb355d"} Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.041975 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6bef-account-create-update-nnbl4" event={"ID":"a431f6b9-1717-4441-88e6-81b22a7abde0","Type":"ContainerStarted","Data":"bc0aa6502827ddbb8ca70e0623208fd8f246f19a7a6de6320c88088bc592aaa1"} Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.087899 4737 generic.go:334] "Generic (PLEG): container 
finished" podID="15d05428-1fe0-474f-8b0e-761f90c035bd" containerID="80e54d313887f949b25e630eb5d0517b1f60fa9851f9bc2d5bf26545c5ad7579" exitCode=0 Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.087988 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2mhwn" event={"ID":"15d05428-1fe0-474f-8b0e-761f90c035bd","Type":"ContainerDied","Data":"80e54d313887f949b25e630eb5d0517b1f60fa9851f9bc2d5bf26545c5ad7579"} Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.139465 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-698d-account-create-update-vzz2k" event={"ID":"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a","Type":"ContainerStarted","Data":"632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd"} Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.151187 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.170258 4737 scope.go:117] "RemoveContainer" containerID="63068da46dd032ec15d7b3b5928e294fd23fa62a1a292859de77e65f8bb1b7ef" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.208113 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7xhdj"] Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.264902 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6bef-account-create-update-nnbl4" podStartSLOduration=3.264872078 podStartE2EDuration="3.264872078s" podCreationTimestamp="2026-01-26 18:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:20.185791109 +0000 UTC m=+1433.493985817" watchObservedRunningTime="2026-01-26 18:54:20.264872078 +0000 UTC m=+1433.573066786" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.824029 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.966290 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts\") pod \"15d05428-1fe0-474f-8b0e-761f90c035bd\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.966688 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfj9g\" (UniqueName: \"kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g\") pod \"15d05428-1fe0-474f-8b0e-761f90c035bd\" (UID: \"15d05428-1fe0-474f-8b0e-761f90c035bd\") " Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.967554 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15d05428-1fe0-474f-8b0e-761f90c035bd" (UID: "15d05428-1fe0-474f-8b0e-761f90c035bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:20 crc kubenswrapper[4737]: I0126 18:54:20.974980 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g" (OuterVolumeSpecName: "kube-api-access-nfj9g") pod "15d05428-1fe0-474f-8b0e-761f90c035bd" (UID: "15d05428-1fe0-474f-8b0e-761f90c035bd"). InnerVolumeSpecName "kube-api-access-nfj9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.012131 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" path="/var/lib/kubelet/pods/4943ea2e-2d2e-4024-97f5-b7a2b288e3b2/volumes" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.070866 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d05428-1fe0-474f-8b0e-761f90c035bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.071167 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfj9g\" (UniqueName: \"kubernetes.io/projected/15d05428-1fe0-474f-8b0e-761f90c035bd-kube-api-access-nfj9g\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.151699 4737 generic.go:334] "Generic (PLEG): container finished" podID="a431f6b9-1717-4441-88e6-81b22a7abde0" containerID="c0be04bd7efa2bcfe9cd8b7e461991b005c3c61b1d8d6258725a790524bb355d" exitCode=0 Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.151744 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6bef-account-create-update-nnbl4" event={"ID":"a431f6b9-1717-4441-88e6-81b22a7abde0","Type":"ContainerDied","Data":"c0be04bd7efa2bcfe9cd8b7e461991b005c3c61b1d8d6258725a790524bb355d"} Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.153902 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2mhwn" event={"ID":"15d05428-1fe0-474f-8b0e-761f90c035bd","Type":"ContainerDied","Data":"2f0094ff5c0f00600390b5f9cb8612284d8c30d4704b12a622920dd9e6b558c4"} Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.153929 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f0094ff5c0f00600390b5f9cb8612284d8c30d4704b12a622920dd9e6b558c4" Jan 26 18:54:21 crc 
kubenswrapper[4737]: I0126 18:54:21.153991 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2mhwn" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.159369 4737 generic.go:334] "Generic (PLEG): container finished" podID="f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" containerID="0598d9556f51bbdab4b3ce9937f1dc5b50d46abd1d8303745114ea66bc7f0ce4" exitCode=0 Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.159479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-698d-account-create-update-vzz2k" event={"ID":"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a","Type":"ContainerDied","Data":"0598d9556f51bbdab4b3ce9937f1dc5b50d46abd1d8303745114ea66bc7f0ce4"} Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.169392 4737 generic.go:334] "Generic (PLEG): container finished" podID="89375687-18cd-4325-87c3-6be0a83ebfd1" containerID="a6f695f816a8da29b882b244979f3aa7a5752def6e1521c19f59a81dc9ca9de8" exitCode=0 Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.169628 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p7gjm" event={"ID":"89375687-18cd-4325-87c3-6be0a83ebfd1","Type":"ContainerDied","Data":"a6f695f816a8da29b882b244979f3aa7a5752def6e1521c19f59a81dc9ca9de8"} Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.174698 4737 generic.go:334] "Generic (PLEG): container finished" podID="f82324be-8ee8-45b6-8f16-23c70c1e9011" containerID="eea43a1d80a8f10682440132a37fc26ab6737c3a964263f4286dce176f6d7459" exitCode=0 Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.175308 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74b2-account-create-update-7gqdr" event={"ID":"f82324be-8ee8-45b6-8f16-23c70c1e9011","Type":"ContainerDied","Data":"eea43a1d80a8f10682440132a37fc26ab6737c3a964263f4286dce176f6d7459"} Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.755184 4737 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.888614 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl752\" (UniqueName: \"kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752\") pod \"7e85eb58-126a-4fe4-9006-e46c8baceac8\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.888889 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts\") pod \"7e85eb58-126a-4fe4-9006-e46c8baceac8\" (UID: \"7e85eb58-126a-4fe4-9006-e46c8baceac8\") " Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.890685 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e85eb58-126a-4fe4-9006-e46c8baceac8" (UID: "7e85eb58-126a-4fe4-9006-e46c8baceac8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.894478 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752" (OuterVolumeSpecName: "kube-api-access-zl752") pod "7e85eb58-126a-4fe4-9006-e46c8baceac8" (UID: "7e85eb58-126a-4fe4-9006-e46c8baceac8"). InnerVolumeSpecName "kube-api-access-zl752". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.974694 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.997699 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e85eb58-126a-4fe4-9006-e46c8baceac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.997737 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl752\" (UniqueName: \"kubernetes.io/projected/7e85eb58-126a-4fe4-9006-e46c8baceac8-kube-api-access-zl752\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:21 crc kubenswrapper[4737]: I0126 18:54:21.999136 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.100696 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hrhv\" (UniqueName: \"kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv\") pod \"bcff1539-022e-45f1-9e55-2e633b8a0346\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.100858 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts\") pod \"30b6ccc8-eb69-4780-b3dc-f53000859836\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.100908 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts\") pod \"bcff1539-022e-45f1-9e55-2e633b8a0346\" (UID: \"bcff1539-022e-45f1-9e55-2e633b8a0346\") " Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.101030 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lf4st\" (UniqueName: \"kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st\") pod \"30b6ccc8-eb69-4780-b3dc-f53000859836\" (UID: \"30b6ccc8-eb69-4780-b3dc-f53000859836\") " Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.102021 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30b6ccc8-eb69-4780-b3dc-f53000859836" (UID: "30b6ccc8-eb69-4780-b3dc-f53000859836"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.102095 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcff1539-022e-45f1-9e55-2e633b8a0346" (UID: "bcff1539-022e-45f1-9e55-2e633b8a0346"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.104674 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st" (OuterVolumeSpecName: "kube-api-access-lf4st") pod "30b6ccc8-eb69-4780-b3dc-f53000859836" (UID: "30b6ccc8-eb69-4780-b3dc-f53000859836"). InnerVolumeSpecName "kube-api-access-lf4st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.107612 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv" (OuterVolumeSpecName: "kube-api-access-4hrhv") pod "bcff1539-022e-45f1-9e55-2e633b8a0346" (UID: "bcff1539-022e-45f1-9e55-2e633b8a0346"). 
InnerVolumeSpecName "kube-api-access-4hrhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.194015 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8950-account-create-update-l8njp" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.194015 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8950-account-create-update-l8njp" event={"ID":"7e85eb58-126a-4fe4-9006-e46c8baceac8","Type":"ContainerDied","Data":"ffc42ffb51d00724c3af740478e7603c1c5c34bb6da3dd209f5f63972a90ecef"} Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.194113 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffc42ffb51d00724c3af740478e7603c1c5c34bb6da3dd209f5f63972a90ecef" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.196389 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7bgx" event={"ID":"bcff1539-022e-45f1-9e55-2e633b8a0346","Type":"ContainerDied","Data":"8276ac3b9c953a3da1fd1b6306d9446e5b2f3645f3460901ea0d74ffb3138d15"} Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.196417 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8276ac3b9c953a3da1fd1b6306d9446e5b2f3645f3460901ea0d74ffb3138d15" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.196486 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-j7bgx" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.198726 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w2r6t" event={"ID":"30b6ccc8-eb69-4780-b3dc-f53000859836","Type":"ContainerDied","Data":"d925755c2a6caecb7a4dbb337fc349c80dc568fb1f296bdfd9ca14126ba383af"} Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.198774 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d925755c2a6caecb7a4dbb337fc349c80dc568fb1f296bdfd9ca14126ba383af" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.199037 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w2r6t" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.204954 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf4st\" (UniqueName: \"kubernetes.io/projected/30b6ccc8-eb69-4780-b3dc-f53000859836-kube-api-access-lf4st\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.204980 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hrhv\" (UniqueName: \"kubernetes.io/projected/bcff1539-022e-45f1-9e55-2e633b8a0346-kube-api-access-4hrhv\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.204995 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30b6ccc8-eb69-4780-b3dc-f53000859836-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:22 crc kubenswrapper[4737]: I0126 18:54:22.205010 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcff1539-022e-45f1-9e55-2e633b8a0346-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.000648 4737 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.011341 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.049648 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.058186 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.076974 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts\") pod \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.077144 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts\") pod \"89375687-18cd-4325-87c3-6be0a83ebfd1\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.077233 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjhbd\" (UniqueName: \"kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd\") pod \"89375687-18cd-4325-87c3-6be0a83ebfd1\" (UID: \"89375687-18cd-4325-87c3-6be0a83ebfd1\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.077331 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlczm\" (UniqueName: 
\"kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm\") pod \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\" (UID: \"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.077785 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" (UID: "f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.077961 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.078051 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89375687-18cd-4325-87c3-6be0a83ebfd1" (UID: "89375687-18cd-4325-87c3-6be0a83ebfd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.094214 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm" (OuterVolumeSpecName: "kube-api-access-wlczm") pod "f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" (UID: "f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a"). InnerVolumeSpecName "kube-api-access-wlczm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.095552 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd" (OuterVolumeSpecName: "kube-api-access-cjhbd") pod "89375687-18cd-4325-87c3-6be0a83ebfd1" (UID: "89375687-18cd-4325-87c3-6be0a83ebfd1"). InnerVolumeSpecName "kube-api-access-cjhbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.180150 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts\") pod \"f82324be-8ee8-45b6-8f16-23c70c1e9011\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.180201 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzf4d\" (UniqueName: \"kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d\") pod \"f82324be-8ee8-45b6-8f16-23c70c1e9011\" (UID: \"f82324be-8ee8-45b6-8f16-23c70c1e9011\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.180317 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmnk\" (UniqueName: \"kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk\") pod \"a431f6b9-1717-4441-88e6-81b22a7abde0\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.180367 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts\") pod \"a431f6b9-1717-4441-88e6-81b22a7abde0\" (UID: \"a431f6b9-1717-4441-88e6-81b22a7abde0\") " Jan 26 18:54:25 crc 
kubenswrapper[4737]: I0126 18:54:25.180791 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f82324be-8ee8-45b6-8f16-23c70c1e9011" (UID: "f82324be-8ee8-45b6-8f16-23c70c1e9011"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.181093 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89375687-18cd-4325-87c3-6be0a83ebfd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.181115 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f82324be-8ee8-45b6-8f16-23c70c1e9011-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.181125 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjhbd\" (UniqueName: \"kubernetes.io/projected/89375687-18cd-4325-87c3-6be0a83ebfd1-kube-api-access-cjhbd\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.181139 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlczm\" (UniqueName: \"kubernetes.io/projected/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a-kube-api-access-wlczm\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.181199 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a431f6b9-1717-4441-88e6-81b22a7abde0" (UID: "a431f6b9-1717-4441-88e6-81b22a7abde0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.184968 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d" (OuterVolumeSpecName: "kube-api-access-qzf4d") pod "f82324be-8ee8-45b6-8f16-23c70c1e9011" (UID: "f82324be-8ee8-45b6-8f16-23c70c1e9011"). InnerVolumeSpecName "kube-api-access-qzf4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.186526 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk" (OuterVolumeSpecName: "kube-api-access-cdmnk") pod "a431f6b9-1717-4441-88e6-81b22a7abde0" (UID: "a431f6b9-1717-4441-88e6-81b22a7abde0"). InnerVolumeSpecName "kube-api-access-cdmnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.237863 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6bef-account-create-update-nnbl4" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.237894 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6bef-account-create-update-nnbl4" event={"ID":"a431f6b9-1717-4441-88e6-81b22a7abde0","Type":"ContainerDied","Data":"bc0aa6502827ddbb8ca70e0623208fd8f246f19a7a6de6320c88088bc592aaa1"} Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.238425 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0aa6502827ddbb8ca70e0623208fd8f246f19a7a6de6320c88088bc592aaa1" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.240660 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-698d-account-create-update-vzz2k" event={"ID":"f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a","Type":"ContainerDied","Data":"632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd"} Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.240728 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632a19e68d6ef93c60018de73a22ecd7ea43d5aca9d2990d1a9bd1b28bf1ebcd" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.240696 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-698d-account-create-update-vzz2k" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.242828 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p7gjm" event={"ID":"89375687-18cd-4325-87c3-6be0a83ebfd1","Type":"ContainerDied","Data":"f331553d7f5e4fbc3cd7f5dfa239616fd0dbff9cbdc232d8debfdc9113b9869f"} Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.242904 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f331553d7f5e4fbc3cd7f5dfa239616fd0dbff9cbdc232d8debfdc9113b9869f" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.242923 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p7gjm" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.244714 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74b2-account-create-update-7gqdr" event={"ID":"f82324be-8ee8-45b6-8f16-23c70c1e9011","Type":"ContainerDied","Data":"50a0e498561b36f398a1f7395351fb58e43ad0ebe866de7f236f3213e522ea0e"} Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.244747 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-74b2-account-create-update-7gqdr" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.244755 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a0e498561b36f398a1f7395351fb58e43ad0ebe866de7f236f3213e522ea0e" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.246496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87tf" event={"ID":"86138c40-9654-4e2b-8fe9-13d418f93750","Type":"ContainerStarted","Data":"86bc79a2bab351c2dfebd5e893856dac42d6ef63370648a2e52d8bb5de7625af"} Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.282106 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z87tf" podStartSLOduration=2.5521588189999997 podStartE2EDuration="8.28204639s" podCreationTimestamp="2026-01-26 18:54:17 +0000 UTC" firstStartedPulling="2026-01-26 18:54:19.06762206 +0000 UTC m=+1432.375816768" lastFinishedPulling="2026-01-26 18:54:24.797509631 +0000 UTC m=+1438.105704339" observedRunningTime="2026-01-26 18:54:25.271379838 +0000 UTC m=+1438.579574546" watchObservedRunningTime="2026-01-26 18:54:25.28204639 +0000 UTC m=+1438.590241098" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.283156 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmnk\" (UniqueName: \"kubernetes.io/projected/a431f6b9-1717-4441-88e6-81b22a7abde0-kube-api-access-cdmnk\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.283193 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a431f6b9-1717-4441-88e6-81b22a7abde0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:25 crc kubenswrapper[4737]: I0126 18:54:25.283204 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzf4d\" (UniqueName: 
\"kubernetes.io/projected/f82324be-8ee8-45b6-8f16-23c70c1e9011-kube-api-access-qzf4d\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:29 crc kubenswrapper[4737]: I0126 18:54:29.286531 4737 generic.go:334] "Generic (PLEG): container finished" podID="86138c40-9654-4e2b-8fe9-13d418f93750" containerID="86bc79a2bab351c2dfebd5e893856dac42d6ef63370648a2e52d8bb5de7625af" exitCode=0 Jan 26 18:54:29 crc kubenswrapper[4737]: I0126 18:54:29.286615 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87tf" event={"ID":"86138c40-9654-4e2b-8fe9-13d418f93750","Type":"ContainerDied","Data":"86bc79a2bab351c2dfebd5e893856dac42d6ef63370648a2e52d8bb5de7625af"} Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.670720 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.809178 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data\") pod \"86138c40-9654-4e2b-8fe9-13d418f93750\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.809237 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pmth\" (UniqueName: \"kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth\") pod \"86138c40-9654-4e2b-8fe9-13d418f93750\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.809342 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle\") pod \"86138c40-9654-4e2b-8fe9-13d418f93750\" (UID: \"86138c40-9654-4e2b-8fe9-13d418f93750\") " Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 
18:54:30.815640 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth" (OuterVolumeSpecName: "kube-api-access-5pmth") pod "86138c40-9654-4e2b-8fe9-13d418f93750" (UID: "86138c40-9654-4e2b-8fe9-13d418f93750"). InnerVolumeSpecName "kube-api-access-5pmth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.837413 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86138c40-9654-4e2b-8fe9-13d418f93750" (UID: "86138c40-9654-4e2b-8fe9-13d418f93750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.858681 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data" (OuterVolumeSpecName: "config-data") pod "86138c40-9654-4e2b-8fe9-13d418f93750" (UID: "86138c40-9654-4e2b-8fe9-13d418f93750"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.912254 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.912290 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pmth\" (UniqueName: \"kubernetes.io/projected/86138c40-9654-4e2b-8fe9-13d418f93750-kube-api-access-5pmth\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.912303 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86138c40-9654-4e2b-8fe9-13d418f93750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.948923 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:54:30 crc kubenswrapper[4737]: I0126 18:54:30.949207 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.308572 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87tf" event={"ID":"86138c40-9654-4e2b-8fe9-13d418f93750","Type":"ContainerDied","Data":"5eb61e1dab396b3e7fa148f8a4d2415bf1abcc21ffcaff6717c61e69cd363fe3"} Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.309309 4737 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb61e1dab396b3e7fa148f8a4d2415bf1abcc21ffcaff6717c61e69cd363fe3" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.308643 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87tf" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.614327 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623770 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a431f6b9-1717-4441-88e6-81b22a7abde0" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623810 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a431f6b9-1717-4441-88e6-81b22a7abde0" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623822 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcff1539-022e-45f1-9e55-2e633b8a0346" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623828 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcff1539-022e-45f1-9e55-2e633b8a0346" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623844 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e85eb58-126a-4fe4-9006-e46c8baceac8" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623850 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e85eb58-126a-4fe4-9006-e46c8baceac8" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623866 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86138c40-9654-4e2b-8fe9-13d418f93750" containerName="keystone-db-sync" Jan 26 18:54:31 crc 
kubenswrapper[4737]: I0126 18:54:31.623872 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="86138c40-9654-4e2b-8fe9-13d418f93750" containerName="keystone-db-sync" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623887 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="init" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623892 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="init" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623902 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d05428-1fe0-474f-8b0e-761f90c035bd" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623913 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d05428-1fe0-474f-8b0e-761f90c035bd" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623926 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="dnsmasq-dns" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623931 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="dnsmasq-dns" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623939 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89375687-18cd-4325-87c3-6be0a83ebfd1" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623945 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89375687-18cd-4325-87c3-6be0a83ebfd1" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623957 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" containerName="mariadb-account-create-update" Jan 26 18:54:31 
crc kubenswrapper[4737]: I0126 18:54:31.623963 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623972 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b6ccc8-eb69-4780-b3dc-f53000859836" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623978 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b6ccc8-eb69-4780-b3dc-f53000859836" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: E0126 18:54:31.623987 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82324be-8ee8-45b6-8f16-23c70c1e9011" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.623993 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82324be-8ee8-45b6-8f16-23c70c1e9011" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624224 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="86138c40-9654-4e2b-8fe9-13d418f93750" containerName="keystone-db-sync" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624243 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d05428-1fe0-474f-8b0e-761f90c035bd" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624252 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89375687-18cd-4325-87c3-6be0a83ebfd1" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624262 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4943ea2e-2d2e-4024-97f5-b7a2b288e3b2" containerName="dnsmasq-dns" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624274 4737 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e85eb58-126a-4fe4-9006-e46c8baceac8" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624287 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b6ccc8-eb69-4780-b3dc-f53000859836" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624298 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624309 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82324be-8ee8-45b6-8f16-23c70c1e9011" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624326 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcff1539-022e-45f1-9e55-2e633b8a0346" containerName="mariadb-database-create" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.624335 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="a431f6b9-1717-4441-88e6-81b22a7abde0" containerName="mariadb-account-create-update" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.625536 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.641203 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lgxxc"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.642673 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.645209 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.645351 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.645492 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z69hk" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.645621 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.646201 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.677340 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.695364 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lgxxc"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.736781 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.736860 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " 
pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.736998 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737030 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737125 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737179 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737209 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " 
pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737250 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737306 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737341 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn58z\" (UniqueName: \"kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737425 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.737602 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khptd\" (UniqueName: \"kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd\") pod \"keystone-bootstrap-lgxxc\" (UID: 
\"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.764693 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-kdfn7"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.766194 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.779153 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.779483 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4flsc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.795349 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kdfn7"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842455 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842510 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842550 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle\") pod \"heat-db-sync-kdfn7\" (UID: 
\"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842596 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842622 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842637 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842657 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842677 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc 
kubenswrapper[4737]: I0126 18:54:31.842697 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842718 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842735 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842755 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn58z\" (UniqueName: \"kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842782 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxnd\" (UniqueName: \"kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842806 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.842873 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khptd\" (UniqueName: \"kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.844566 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.845571 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.846244 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.846818 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.847413 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.860432 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.886456 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.887006 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.887272 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle\") pod \"keystone-bootstrap-lgxxc\" (UID: 
\"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.896805 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.899283 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5pb7v"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.899434 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khptd\" (UniqueName: \"kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd\") pod \"keystone-bootstrap-lgxxc\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") " pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.901647 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.903380 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn58z\" (UniqueName: \"kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z\") pod \"dnsmasq-dns-847c4cc679-m4bfc\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.911016 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7qtqf" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.912459 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.913119 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.929446 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5pb7v"] Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.944452 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.944732 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.944837 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-spxnd\" (UniqueName: \"kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.950419 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.963834 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:31 crc kubenswrapper[4737]: I0126 18:54:31.988740 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:31.990872 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxnd\" (UniqueName: \"kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd\") pod \"heat-db-sync-kdfn7\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.008284 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgxxc" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.110483 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-crvp5"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.163541 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.171148 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.171210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.171796 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kdfn7" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.171962 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.172944 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.173271 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.173306 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45lf\" (UniqueName: \"kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.182360 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b6wq" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.182557 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.197333 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-crvp5"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.245610 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sk8gf"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.254698 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.264249 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.264506 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2w8wx" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.264778 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275573 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275631 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275651 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275675 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " 
pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275807 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.275827 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45lf\" (UniqueName: \"kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.279097 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sk8gf"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.280879 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.281200 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.294390 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle\") pod \"cinder-db-sync-5pb7v\" (UID: 
\"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.302741 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.305898 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.310627 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.314816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45lf\" (UniqueName: \"kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf\") pod \"cinder-db-sync-5pb7v\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.326471 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.339523 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.345371 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.355560 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.369569 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.370163 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.385701 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgcl\" (UniqueName: \"kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.385746 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.385824 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.385879 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc 
kubenswrapper[4737]: I0126 18:54:32.385929 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfm6g\" (UniqueName: \"kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.385950 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.429008 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.450640 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.465055 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8nbml"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.467157 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.472409 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.472662 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.473291 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vnvks" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.481147 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8nbml"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488305 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfm6g\" (UniqueName: \"kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488349 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488376 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488394 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488417 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488455 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488471 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488516 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cgcl\" (UniqueName: \"kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488536 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488561 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488589 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488617 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488637 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488673 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488691 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488723 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t26\" (UniqueName: \"kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488749 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488778 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gfj\" (UniqueName: \"kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.488797 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.496602 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.504957 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.505619 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.516087 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.517866 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.531439 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfm6g\" (UniqueName: 
\"kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g\") pod \"barbican-db-sync-crvp5\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.534954 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cgcl\" (UniqueName: \"kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl\") pod \"neutron-db-sync-sk8gf\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.590808 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591309 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591350 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591447 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6q8t\" (UniqueName: \"kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591533 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591595 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591659 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56t26\" (UniqueName: \"kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc 
kubenswrapper[4737]: I0126 18:54:32.591684 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591807 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gfj\" (UniqueName: \"kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591850 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591925 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591952 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.591989 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.592029 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.592118 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.592151 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.592685 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.593058 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb\") pod 
\"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.593336 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.594398 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.595968 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.596591 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.599107 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.599523 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.600149 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.600886 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.610412 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.613057 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gfj\" (UniqueName: \"kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj\") pod \"dnsmasq-dns-785d8bcb8c-lmn22\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") " pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.617023 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56t26\" (UniqueName: 
\"kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26\") pod \"ceilometer-0\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.619857 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.696372 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.696424 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.696489 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6q8t\" (UniqueName: \"kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.696604 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.696724 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.697099 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.700509 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.700626 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.700794 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.701995 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.721249 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6q8t\" (UniqueName: \"kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t\") pod \"placement-db-sync-8nbml\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") " pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.729544 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.780175 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.789366 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.792955 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.795404 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.795568 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5gpvt" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.795885 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.796262 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.819564 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8nbml" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.826967 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-crvp5" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.903736 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.905597 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.907934 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.907961 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908010 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908094 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908116 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs\") pod 
\"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908146 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908180 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frx5g\" (UniqueName: \"kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.908195 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.913926 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.914163 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.928567 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:54:32 crc kubenswrapper[4737]: I0126 18:54:32.942444 4737 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009483 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009845 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009885 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009920 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009945 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frx5g\" (UniqueName: \"kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" 
Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009964 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.009996 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010043 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010091 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010114 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82c27\" (UniqueName: \"kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27\") pod \"glance-default-internal-api-0\" (UID: 
\"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010139 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010157 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010197 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010229 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010257 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010279 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.010695 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.011479 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.018593 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.019279 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.019318 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/543a0865806adc8a1aa4ef4cf4d6f37534ce583cc9c348d82f63f0aa114aec1f/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.021291 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.023541 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.034548 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.043289 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frx5g\" (UniqueName: \"kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g\") pod 
\"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.097512 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116500 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82c27\" (UniqueName: \"kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116710 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116768 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116800 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.116965 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.117041 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.117145 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.123636 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.124267 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.125141 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.127136 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.128757 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.128797 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fdf5547aa86845271a11e2b0db53f95e86a38bbd5e41234fa2d6106d36b4b80f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.132906 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.151155 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82c27\" (UniqueName: \"kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.152628 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.153934 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.225066 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.413860 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" event={"ID":"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf","Type":"ContainerStarted","Data":"11f2d3b741e998db30947b22c74ce3890e2ae3068b2e3100c9aa82abe65427cd"} Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.439349 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.476579 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kdfn7"] Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.572632 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lgxxc"] Jan 26 18:54:33 crc kubenswrapper[4737]: I0126 18:54:33.706018 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5pb7v"] Jan 26 18:54:34 crc kubenswrapper[4737]: W0126 18:54:34.235318 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccac15d0_8553_4c25_9bac_4f65d06e7d0e.slice/crio-9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13 WatchSource:0}: Error finding container 9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13: Status 404 returned error can't find the container with 
id 9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13 Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.266977 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sk8gf"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.330248 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.351488 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.375170 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8nbml"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.384937 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-crvp5"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.439562 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerStarted","Data":"eac0d59303e3deb71a51dc974899adfac9802ad015d66af0fa9a58e23a1d6a77"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.442922 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk8gf" event={"ID":"ccac15d0-8553-4c25-9bac-4f65d06e7d0e","Type":"ContainerStarted","Data":"9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.484942 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kdfn7" event={"ID":"54a9f74e-fc12-43b7-aca3-0594480e0222","Type":"ContainerStarted","Data":"fcda7f7865bf8ceadf7b23f11e3e35be2d4df8bde0693def7e093444acf3e2c1"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.493938 4737 generic.go:334] "Generic (PLEG): container finished" podID="ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" 
containerID="bacf7a77a3f313e033c3663ae15745009659a259461e31a1adfc662d8173340d" exitCode=0 Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.494004 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" event={"ID":"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf","Type":"ContainerDied","Data":"bacf7a77a3f313e033c3663ae15745009659a259461e31a1adfc662d8173340d"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.547343 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgxxc" event={"ID":"4d734d9b-30ed-4ef7-b4c6-6958b19e6118","Type":"ContainerStarted","Data":"a5a1a24c6d16166051da6f258f5ce4c4c2ed6a4c723f322d6a20383febb61693"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.547704 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgxxc" event={"ID":"4d734d9b-30ed-4ef7-b4c6-6958b19e6118","Type":"ContainerStarted","Data":"1d807e48c8d252130eb4d769d4a8c5f52e469fe224139a8627fbe190edba7a6a"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.551716 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" event={"ID":"093651a2-4ab4-4c4a-8b9a-16836c7117bc","Type":"ContainerStarted","Data":"9a687d1a7a80b9dcd56da66bd46e7fb94a6efb862b59f42657c8654131dc3582"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.581123 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5pb7v" event={"ID":"cac069b5-db5e-47ec-ada0-7e6acf1af111","Type":"ContainerStarted","Data":"f672bd6815e1dba3fa766b1bd4fb4a64a0af4b9e36fb8969c36d7c27f6e3927d"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.620293 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.623020 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8nbml" 
event={"ID":"11147190-1d45-4798-83d7-449cd574a296","Type":"ContainerStarted","Data":"adbd01fefa44fc8454f428e996b1c6b81479ae26c7c1d2dbb979e864cb2709ce"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.626815 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lgxxc" podStartSLOduration=3.62679054 podStartE2EDuration="3.62679054s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:34.586728084 +0000 UTC m=+1447.894922792" watchObservedRunningTime="2026-01-26 18:54:34.62679054 +0000 UTC m=+1447.934985248" Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.647545 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crvp5" event={"ID":"31ee14c5-9b8d-4903-afc7-0b7c643b2756","Type":"ContainerStarted","Data":"f9824b8a74b0863a62a8520fd09957425e44a256b6ac5508d28cc8e1554277a3"} Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.760009 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:54:34 crc kubenswrapper[4737]: I0126 18:54:34.785101 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.102394 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:54:35 crc kubenswrapper[4737]: E0126 18:54:35.182359 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod093651a2_4ab4_4c4a_8b9a_16836c7117bc.slice/crio-conmon-871f950cb19477940b7dc8a749acc98004ad6e09bf6f85ec85d3aff84bc93bdc.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.303263 4737 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.408111 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.408247 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.408382 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn58z\" (UniqueName: \"kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.409132 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.409174 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.409281 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0\") pod \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\" (UID: \"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf\") " Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.426414 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z" (OuterVolumeSpecName: "kube-api-access-mn58z") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). InnerVolumeSpecName "kube-api-access-mn58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.453294 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.459023 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.473158 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.473525 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.494157 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config" (OuterVolumeSpecName: "config") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.498789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" (UID: "ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513096 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513137 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513151 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn58z\" (UniqueName: \"kubernetes.io/projected/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-kube-api-access-mn58z\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513164 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513177 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.513188 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.709303 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk8gf" event={"ID":"ccac15d0-8553-4c25-9bac-4f65d06e7d0e","Type":"ContainerStarted","Data":"cc68631ceb5ab7897346be8341af243713cf34e8432f039ed3d3d66dbcd8ac62"} Jan 26 18:54:35 crc 
kubenswrapper[4737]: I0126 18:54:35.733585 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" event={"ID":"ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf","Type":"ContainerDied","Data":"11f2d3b741e998db30947b22c74ce3890e2ae3068b2e3100c9aa82abe65427cd"} Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.733645 4737 scope.go:117] "RemoveContainer" containerID="bacf7a77a3f313e033c3663ae15745009659a259461e31a1adfc662d8173340d" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.733823 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-m4bfc" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.744477 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sk8gf" podStartSLOduration=4.744447368 podStartE2EDuration="4.744447368s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:35.73271399 +0000 UTC m=+1449.040908718" watchObservedRunningTime="2026-01-26 18:54:35.744447368 +0000 UTC m=+1449.052642076" Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.748967 4737 generic.go:334] "Generic (PLEG): container finished" podID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerID="871f950cb19477940b7dc8a749acc98004ad6e09bf6f85ec85d3aff84bc93bdc" exitCode=0 Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.749017 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" event={"ID":"093651a2-4ab4-4c4a-8b9a-16836c7117bc","Type":"ContainerDied","Data":"871f950cb19477940b7dc8a749acc98004ad6e09bf6f85ec85d3aff84bc93bdc"} Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.754440 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerStarted","Data":"ec7d2549c9234a4ea84ff8bf2a9f1ec74d544a7a397f958a90ea331e8c563871"} Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.775909 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerStarted","Data":"7696cba41d28aac4113b962d0c2a6f93b1a34bf6ce7b6f7a422ce4b750fff091"} Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.841756 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:35 crc kubenswrapper[4737]: I0126 18:54:35.897869 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-m4bfc"] Jan 26 18:54:36 crc kubenswrapper[4737]: I0126 18:54:36.799051 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerStarted","Data":"f733f173d976a8f85f05631e64418cc180f1a3d4fc27e7a735162805d6a4960e"} Jan 26 18:54:36 crc kubenswrapper[4737]: I0126 18:54:36.807410 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" event={"ID":"093651a2-4ab4-4c4a-8b9a-16836c7117bc","Type":"ContainerStarted","Data":"77d45fcac6a9c74293c6ce3e47d05de62ab15841ad7284eb54126fb8304f13d7"} Jan 26 18:54:36 crc kubenswrapper[4737]: I0126 18:54:36.807675 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" Jan 26 18:54:36 crc kubenswrapper[4737]: I0126 18:54:36.850861 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" podStartSLOduration=4.850388139 podStartE2EDuration="4.850388139s" podCreationTimestamp="2026-01-26 18:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 18:54:36.829522936 +0000 UTC m=+1450.137717644" watchObservedRunningTime="2026-01-26 18:54:36.850388139 +0000 UTC m=+1450.158582837" Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.016705 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" path="/var/lib/kubelet/pods/ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf/volumes" Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.842582 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerStarted","Data":"d38e7f3d247ed56a6459e0e9ddadd13b7b573de70e09065d4acf8379f99a7d36"} Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.847040 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-log" containerID="cri-o://f733f173d976a8f85f05631e64418cc180f1a3d4fc27e7a735162805d6a4960e" gracePeriod=30 Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.847198 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerStarted","Data":"4107b92d3860196b5cab9f4e3357b14a9bd5a5b2eb05ebb55ee5e19d084e9dbd"} Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.847397 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-httpd" containerID="cri-o://4107b92d3860196b5cab9f4e3357b14a9bd5a5b2eb05ebb55ee5e19d084e9dbd" gracePeriod=30 Jan 26 18:54:37 crc kubenswrapper[4737]: I0126 18:54:37.887992 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.887968674 
podStartE2EDuration="6.887968674s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:37.877449065 +0000 UTC m=+1451.185643783" watchObservedRunningTime="2026-01-26 18:54:37.887968674 +0000 UTC m=+1451.196163382" Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.868897 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerStarted","Data":"a8156db2183fd08413c7f56cf5b8bf860455800a49cb63df9d9f375e05e822a2"} Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.869116 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-log" containerID="cri-o://d38e7f3d247ed56a6459e0e9ddadd13b7b573de70e09065d4acf8379f99a7d36" gracePeriod=30 Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.869125 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-httpd" containerID="cri-o://a8156db2183fd08413c7f56cf5b8bf860455800a49cb63df9d9f375e05e822a2" gracePeriod=30 Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.876837 4737 generic.go:334] "Generic (PLEG): container finished" podID="4d02a338-d239-4e47-9d1f-49f30678168b" containerID="4107b92d3860196b5cab9f4e3357b14a9bd5a5b2eb05ebb55ee5e19d084e9dbd" exitCode=0 Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.876892 4737 generic.go:334] "Generic (PLEG): container finished" podID="4d02a338-d239-4e47-9d1f-49f30678168b" containerID="f733f173d976a8f85f05631e64418cc180f1a3d4fc27e7a735162805d6a4960e" exitCode=143 Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.876929 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerDied","Data":"4107b92d3860196b5cab9f4e3357b14a9bd5a5b2eb05ebb55ee5e19d084e9dbd"} Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.876974 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerDied","Data":"f733f173d976a8f85f05631e64418cc180f1a3d4fc27e7a735162805d6a4960e"} Jan 26 18:54:38 crc kubenswrapper[4737]: I0126 18:54:38.909341 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.909313635 podStartE2EDuration="7.909313635s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:54:38.899255687 +0000 UTC m=+1452.207450396" watchObservedRunningTime="2026-01-26 18:54:38.909313635 +0000 UTC m=+1452.217508343" Jan 26 18:54:39 crc kubenswrapper[4737]: I0126 18:54:39.908398 4737 generic.go:334] "Generic (PLEG): container finished" podID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerID="a8156db2183fd08413c7f56cf5b8bf860455800a49cb63df9d9f375e05e822a2" exitCode=0 Jan 26 18:54:39 crc kubenswrapper[4737]: I0126 18:54:39.908655 4737 generic.go:334] "Generic (PLEG): container finished" podID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerID="d38e7f3d247ed56a6459e0e9ddadd13b7b573de70e09065d4acf8379f99a7d36" exitCode=143 Jan 26 18:54:39 crc kubenswrapper[4737]: I0126 18:54:39.908473 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerDied","Data":"a8156db2183fd08413c7f56cf5b8bf860455800a49cb63df9d9f375e05e822a2"} Jan 26 18:54:39 crc kubenswrapper[4737]: I0126 
18:54:39.908708 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerDied","Data":"d38e7f3d247ed56a6459e0e9ddadd13b7b573de70e09065d4acf8379f99a7d36"}
Jan 26 18:54:40 crc kubenswrapper[4737]: I0126 18:54:40.941093 4737 generic.go:334] "Generic (PLEG): container finished" podID="4d734d9b-30ed-4ef7-b4c6-6958b19e6118" containerID="a5a1a24c6d16166051da6f258f5ce4c4c2ed6a4c723f322d6a20383febb61693" exitCode=0
Jan 26 18:54:40 crc kubenswrapper[4737]: I0126 18:54:40.941145 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgxxc" event={"ID":"4d734d9b-30ed-4ef7-b4c6-6958b19e6118","Type":"ContainerDied","Data":"a5a1a24c6d16166051da6f258f5ce4c4c2ed6a4c723f322d6a20383febb61693"}
Jan 26 18:54:42 crc kubenswrapper[4737]: I0126 18:54:42.704412 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22"
Jan 26 18:54:42 crc kubenswrapper[4737]: I0126 18:54:42.794680 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"]
Jan 26 18:54:42 crc kubenswrapper[4737]: I0126 18:54:42.794962 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" containerID="cri-o://0139b3b8a2667f813f0f611daa16ab2f4f01af86dcb8ff3a6f36ef7c7ed9b22e" gracePeriod=10
Jan 26 18:54:43 crc kubenswrapper[4737]: I0126 18:54:43.625513 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: connect: connection refused"
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.122721 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.125008 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgxxc"
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.194622 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.194709 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.194789 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.194822 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.194922 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82c27\" (UniqueName: \"kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195059 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195180 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195219 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195234 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195258 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khptd\" (UniqueName: \"kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195281 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195307 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195335 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.195354 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle\") pod \"4d02a338-d239-4e47-9d1f-49f30678168b\" (UID: \"4d02a338-d239-4e47-9d1f-49f30678168b\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.208868 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.209879 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs" (OuterVolumeSpecName: "logs") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.212439 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts" (OuterVolumeSpecName: "scripts") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.215936 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.220871 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27" (OuterVolumeSpecName: "kube-api-access-82c27") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "kube-api-access-82c27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.222911 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.239933 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts" (OuterVolumeSpecName: "scripts") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.240446 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd" (OuterVolumeSpecName: "kube-api-access-khptd") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "kube-api-access-khptd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.257561 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780" (OuterVolumeSpecName: "glance") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.282597 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.296909 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data" (OuterVolumeSpecName: "config-data") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.297852 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") pod \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\" (UID: \"4d734d9b-30ed-4ef7-b4c6-6958b19e6118\") "
Jan 26 18:54:44 crc kubenswrapper[4737]: W0126 18:54:44.297999 4737 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4d734d9b-30ed-4ef7-b4c6-6958b19e6118/volumes/kubernetes.io~secret/config-data
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298025 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data" (OuterVolumeSpecName: "config-data") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298641 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298664 4737 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298677 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khptd\" (UniqueName: \"kubernetes.io/projected/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-kube-api-access-khptd\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298691 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298704 4737 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298716 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298727 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298737 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298749 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d02a338-d239-4e47-9d1f-49f30678168b-logs\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298760 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82c27\" (UniqueName: \"kubernetes.io/projected/4d02a338-d239-4e47-9d1f-49f30678168b-kube-api-access-82c27\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.298811 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") on node \"crc\" "
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.308270 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d734d9b-30ed-4ef7-b4c6-6958b19e6118" (UID: "4d734d9b-30ed-4ef7-b4c6-6958b19e6118"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.311575 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data" (OuterVolumeSpecName: "config-data") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.331936 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.332123 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780") on node "crc"
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.335213 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4d02a338-d239-4e47-9d1f-49f30678168b" (UID: "4d02a338-d239-4e47-9d1f-49f30678168b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.401484 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.401542 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.401554 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d734d9b-30ed-4ef7-b4c6-6958b19e6118-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:44 crc kubenswrapper[4737]: I0126 18:54:44.401566 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d02a338-d239-4e47-9d1f-49f30678168b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.001001 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.002839 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d02a338-d239-4e47-9d1f-49f30678168b","Type":"ContainerDied","Data":"7696cba41d28aac4113b962d0c2a6f93b1a34bf6ce7b6f7a422ce4b750fff091"}
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.002893 4737 scope.go:117] "RemoveContainer" containerID="4107b92d3860196b5cab9f4e3357b14a9bd5a5b2eb05ebb55ee5e19d084e9dbd"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.003923 4737 generic.go:334] "Generic (PLEG): container finished" podID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerID="0139b3b8a2667f813f0f611daa16ab2f4f01af86dcb8ff3a6f36ef7c7ed9b22e" exitCode=0
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.004057 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" event={"ID":"fdf19f0a-8101-42b8-85d0-c97f63045b3d","Type":"ContainerDied","Data":"0139b3b8a2667f813f0f611daa16ab2f4f01af86dcb8ff3a6f36ef7c7ed9b22e"}
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.014802 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgxxc" event={"ID":"4d734d9b-30ed-4ef7-b4c6-6958b19e6118","Type":"ContainerDied","Data":"1d807e48c8d252130eb4d769d4a8c5f52e469fe224139a8627fbe190edba7a6a"}
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.014843 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d807e48c8d252130eb4d769d4a8c5f52e469fe224139a8627fbe190edba7a6a"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.014896 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgxxc"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.079919 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.098545 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.119139 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 18:54:45 crc kubenswrapper[4737]: E0126 18:54:45.119751 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-httpd"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.119771 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-httpd"
Jan 26 18:54:45 crc kubenswrapper[4737]: E0126 18:54:45.119785 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d734d9b-30ed-4ef7-b4c6-6958b19e6118" containerName="keystone-bootstrap"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.119793 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d734d9b-30ed-4ef7-b4c6-6958b19e6118" containerName="keystone-bootstrap"
Jan 26 18:54:45 crc kubenswrapper[4737]: E0126 18:54:45.119833 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-log"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.119838 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-log"
Jan 26 18:54:45 crc kubenswrapper[4737]: E0126 18:54:45.119846 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" containerName="init"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.119851 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" containerName="init"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.120058 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-log"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.120096 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddafd8d0-dd20-4ee7-8653-b7a44b5f1faf" containerName="init"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.120109 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d734d9b-30ed-4ef7-b4c6-6958b19e6118" containerName="keystone-bootstrap"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.120127 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" containerName="glance-httpd"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.121526 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.123575 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.123754 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.131538 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.271749 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lgxxc"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.281543 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lgxxc"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293447 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293611 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293656 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwtg\" (UniqueName: \"kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293700 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293755 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293800 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293833 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.293920 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.385759 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vbj8n"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.387856 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.392199 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-z69hk"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.392823 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.393428 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.393566 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.393618 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410272 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410378 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410426 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wt6\" (UniqueName: \"kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410509 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwtg\" (UniqueName: \"kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410662 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410705 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410737 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410795 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410841 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.410857 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411043 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411126 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411164 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411206 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411576 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.411609 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.417667 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.418532 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.419408 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.419449 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fdf5547aa86845271a11e2b0db53f95e86a38bbd5e41234fa2d6106d36b4b80f/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.419831 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.421437 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.427097 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vbj8n"]
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.463813 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwtg\" (UniqueName: \"kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.479548 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " pod="openstack/glance-default-internal-api-0"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.514020 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.515266 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wt6\" (UniqueName: \"kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.515337 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.515421 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.515471 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.515612 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.518557 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.519567 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys\") pod \"keystone-bootstrap-vbj8n\"
(UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.519879 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.520429 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.523849 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.536315 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wt6\" (UniqueName: \"kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6\") pod \"keystone-bootstrap-vbj8n\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.617527 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbj8n" Jan 26 18:54:45 crc kubenswrapper[4737]: I0126 18:54:45.757161 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:54:46 crc kubenswrapper[4737]: I0126 18:54:46.994973 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d02a338-d239-4e47-9d1f-49f30678168b" path="/var/lib/kubelet/pods/4d02a338-d239-4e47-9d1f-49f30678168b/volumes" Jan 26 18:54:46 crc kubenswrapper[4737]: I0126 18:54:46.996326 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d734d9b-30ed-4ef7-b4c6-6958b19e6118" path="/var/lib/kubelet/pods/4d734d9b-30ed-4ef7-b4c6-6958b19e6118/volumes" Jan 26 18:54:47 crc kubenswrapper[4737]: I0126 18:54:47.973930 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:54:47 crc kubenswrapper[4737]: I0126 18:54:47.979186 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:47 crc kubenswrapper[4737]: I0126 18:54:47.986730 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.086144 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.086358 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjv9\" (UniqueName: \"kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.086435 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.189310 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.189395 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjv9\" (UniqueName: \"kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.189429 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.190021 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.190055 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.223014 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjv9\" (UniqueName: \"kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9\") pod \"redhat-operators-s7g98\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.309444 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:54:48 crc kubenswrapper[4737]: I0126 18:54:48.625434 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: connect: connection refused" Jan 26 18:54:51 crc kubenswrapper[4737]: E0126 18:54:51.587485 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 26 18:54:51 crc kubenswrapper[4737]: E0126 18:54:51.588396 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6q8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-8nbml_openstack(11147190-1d45-4798-83d7-449cd574a296): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:54:51 crc kubenswrapper[4737]: E0126 18:54:51.589806 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-8nbml" podUID="11147190-1d45-4798-83d7-449cd574a296" Jan 26 18:54:52 crc kubenswrapper[4737]: E0126 18:54:52.087932 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-8nbml" podUID="11147190-1d45-4798-83d7-449cd574a296" Jan 26 18:54:58 crc kubenswrapper[4737]: I0126 18:54:58.626399 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: i/o timeout" Jan 26 18:54:58 crc kubenswrapper[4737]: I0126 18:54:58.627116 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:54:59 crc kubenswrapper[4737]: I0126 18:54:59.163269 4737 generic.go:334] "Generic (PLEG): container finished" podID="ccac15d0-8553-4c25-9bac-4f65d06e7d0e" containerID="cc68631ceb5ab7897346be8341af243713cf34e8432f039ed3d3d66dbcd8ac62" exitCode=0 Jan 26 18:54:59 crc kubenswrapper[4737]: I0126 18:54:59.163323 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk8gf" event={"ID":"ccac15d0-8553-4c25-9bac-4f65d06e7d0e","Type":"ContainerDied","Data":"cc68631ceb5ab7897346be8341af243713cf34e8432f039ed3d3d66dbcd8ac62"} Jan 26 18:55:00 
crc kubenswrapper[4737]: I0126 18:55:00.949649 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:55:00 crc kubenswrapper[4737]: I0126 18:55:00.951249 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:55:00 crc kubenswrapper[4737]: I0126 18:55:00.951333 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:55:00 crc kubenswrapper[4737]: I0126 18:55:00.952314 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:55:00 crc kubenswrapper[4737]: I0126 18:55:00.952378 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c" gracePeriod=600 Jan 26 18:55:00 crc kubenswrapper[4737]: E0126 18:55:00.984935 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 26 18:55:00 crc kubenswrapper[4737]: E0126 18:55:00.985362 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spxnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-kdfn7_openstack(54a9f74e-fc12-43b7-aca3-0594480e0222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:55:00 crc kubenswrapper[4737]: E0126 18:55:00.986560 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-kdfn7" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.089608 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.186771 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c" exitCode=0 Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.186836 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c"} Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.189722 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"08f37444-370d-4f98-ac51-69ff25dadfb1","Type":"ContainerDied","Data":"ec7d2549c9234a4ea84ff8bf2a9f1ec74d544a7a397f958a90ea331e8c563871"} Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.189775 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:55:01 crc kubenswrapper[4737]: E0126 18:55:01.191658 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-kdfn7" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243035 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243248 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243444 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243509 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243529 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243531 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs" (OuterVolumeSpecName: "logs") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243560 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frx5g\" (UniqueName: \"kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243590 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243641 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.243729 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: 
"08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.244156 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.244177 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f37444-370d-4f98-ac51-69ff25dadfb1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.250236 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g" (OuterVolumeSpecName: "kube-api-access-frx5g") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "kube-api-access-frx5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.260439 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts" (OuterVolumeSpecName: "scripts") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.268921 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43" (OuterVolumeSpecName: "glance") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.275527 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: E0126 18:55:01.301604 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs podName:08f37444-370d-4f98-ac51-69ff25dadfb1 nodeName:}" failed. No retries permitted until 2026-01-26 18:55:01.801571333 +0000 UTC m=+1475.109766041 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1") : error deleting /var/lib/kubelet/pods/08f37444-370d-4f98-ac51-69ff25dadfb1/volume-subpaths: remove /var/lib/kubelet/pods/08f37444-370d-4f98-ac51-69ff25dadfb1/volume-subpaths: no such file or directory Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.304278 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data" (OuterVolumeSpecName: "config-data") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.346840 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.346888 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.346900 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frx5g\" (UniqueName: \"kubernetes.io/projected/08f37444-370d-4f98-ac51-69ff25dadfb1-kube-api-access-frx5g\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.346912 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.346940 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") on node \"crc\" " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.376252 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.376424 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43") on node "crc" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.448464 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: E0126 18:55:01.639601 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 26 18:55:01 crc kubenswrapper[4737]: E0126 18:55:01.639802 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfm6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-crvp5_openstack(31ee14c5-9b8d-4903-afc7-0b7c643b2756): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:55:01 crc kubenswrapper[4737]: E0126 18:55:01.641002 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-crvp5" 
podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.684571 4737 scope.go:117] "RemoveContainer" containerID="f733f173d976a8f85f05631e64418cc180f1a3d4fc27e7a735162805d6a4960e" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.740640 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.748039 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856176 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle\") pod \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856242 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856323 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgcl\" (UniqueName: \"kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl\") pod \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856402 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: 
\"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856451 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") pod \"08f37444-370d-4f98-ac51-69ff25dadfb1\" (UID: \"08f37444-370d-4f98-ac51-69ff25dadfb1\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856558 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856587 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856605 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856636 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config\") pod \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\" (UID: \"ccac15d0-8553-4c25-9bac-4f65d06e7d0e\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.856680 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5tph\" (UniqueName: 
\"kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph\") pod \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\" (UID: \"fdf19f0a-8101-42b8-85d0-c97f63045b3d\") " Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.861219 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08f37444-370d-4f98-ac51-69ff25dadfb1" (UID: "08f37444-370d-4f98-ac51-69ff25dadfb1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.862211 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl" (OuterVolumeSpecName: "kube-api-access-5cgcl") pod "ccac15d0-8553-4c25-9bac-4f65d06e7d0e" (UID: "ccac15d0-8553-4c25-9bac-4f65d06e7d0e"). InnerVolumeSpecName "kube-api-access-5cgcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.862739 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph" (OuterVolumeSpecName: "kube-api-access-q5tph") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "kube-api-access-q5tph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.905220 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config" (OuterVolumeSpecName: "config") pod "ccac15d0-8553-4c25-9bac-4f65d06e7d0e" (UID: "ccac15d0-8553-4c25-9bac-4f65d06e7d0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.909884 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.910465 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccac15d0-8553-4c25-9bac-4f65d06e7d0e" (UID: "ccac15d0-8553-4c25-9bac-4f65d06e7d0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.923471 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.928129 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config" (OuterVolumeSpecName: "config") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.933656 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.934978 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdf19f0a-8101-42b8-85d0-c97f63045b3d" (UID: "fdf19f0a-8101-42b8-85d0-c97f63045b3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959036 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f37444-370d-4f98-ac51-69ff25dadfb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959107 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959120 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959131 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc 
kubenswrapper[4737]: I0126 18:55:01.959147 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959159 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tph\" (UniqueName: \"kubernetes.io/projected/fdf19f0a-8101-42b8-85d0-c97f63045b3d-kube-api-access-q5tph\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959174 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959183 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959192 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cgcl\" (UniqueName: \"kubernetes.io/projected/ccac15d0-8553-4c25-9bac-4f65d06e7d0e-kube-api-access-5cgcl\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:01 crc kubenswrapper[4737]: I0126 18:55:01.959201 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf19f0a-8101-42b8-85d0-c97f63045b3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.144046 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.159744 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 
18:55:02.176482 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.177048 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177085 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.177114 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-httpd" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177122 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-httpd" Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.177138 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccac15d0-8553-4c25-9bac-4f65d06e7d0e" containerName="neutron-db-sync" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177145 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccac15d0-8553-4c25-9bac-4f65d06e7d0e" containerName="neutron-db-sync" Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.177156 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="init" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177161 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="init" Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.177181 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-log" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177189 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-log" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177374 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177389 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccac15d0-8553-4c25-9bac-4f65d06e7d0e" containerName="neutron-db-sync" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177402 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-log" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.177418 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" containerName="glance-httpd" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.178756 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.181792 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.182052 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.195202 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.233916 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" event={"ID":"fdf19f0a-8101-42b8-85d0-c97f63045b3d","Type":"ContainerDied","Data":"dcaebecce088f70e98083f98dcd6c618ccbbf2f031da050b70e64844554dbfaf"} Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.234138 4737 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.245932 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk8gf" event={"ID":"ccac15d0-8553-4c25-9bac-4f65d06e7d0e","Type":"ContainerDied","Data":"9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13"} Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.245959 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk8gf" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.245982 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb1b7b2cdc6a1b0eb41241d17aaf25249a1c878ab1ace0b0a8a09bd4b712c13" Jan 26 18:55:02 crc kubenswrapper[4737]: E0126 18:55:02.258701 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-crvp5" podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.269607 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.269758 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 
18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.269977 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.270230 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.270309 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.270339 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.270418 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tpb\" (UniqueName: \"kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb\") pod \"glance-default-external-api-0\" (UID: 
\"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.270994 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.281860 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.297522 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-p5nqb"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.373939 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374076 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374162 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 
18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374238 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374307 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374346 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374386 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.374421 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tpb\" (UniqueName: \"kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc 
kubenswrapper[4737]: I0126 18:55:02.376186 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.376332 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.379864 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.380500 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.380524 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/543a0865806adc8a1aa4ef4cf4d6f37534ce583cc9c348d82f63f0aa114aec1f/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.382361 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.382557 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.382991 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.393267 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tpb\" (UniqueName: 
\"kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.426717 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.503953 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.953218 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.956813 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:02 crc kubenswrapper[4737]: I0126 18:55:02.974777 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.017456 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f37444-370d-4f98-ac51-69ff25dadfb1" path="/var/lib/kubelet/pods/08f37444-370d-4f98-ac51-69ff25dadfb1/volumes" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.019053 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" path="/var/lib/kubelet/pods/fdf19f0a-8101-42b8-85d0-c97f63045b3d/volumes" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.102567 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwz9\" (UniqueName: \"kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.104583 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.104970 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc 
kubenswrapper[4737]: I0126 18:55:03.105045 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.110021 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.110538 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.181968 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.184226 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.187721 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2w8wx" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.194423 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.194666 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.203372 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.222313 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwz9\" (UniqueName: \"kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.222593 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.223045 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.223395 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.223573 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.223929 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.224454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.225991 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.228866 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.229470 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.230193 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.271729 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.272208 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwz9\" (UniqueName: \"kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9\") pod \"dnsmasq-dns-55f844cf75-5ndsz\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.337848 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.340680 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.341222 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrrj\" (UniqueName: \"kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.343087 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.347630 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.347741 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: 
\"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.451330 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.451432 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.451527 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.451610 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrrj\" (UniqueName: \"kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.451647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 
crc kubenswrapper[4737]: I0126 18:55:03.455299 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.458894 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.461824 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.472105 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.482528 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrrj\" (UniqueName: \"kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj\") pod \"neutron-75ff77bb76-fx82z\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.508033 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.627027 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-p5nqb" podUID="fdf19f0a-8101-42b8-85d0-c97f63045b3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: i/o timeout" Jan 26 18:55:03 crc kubenswrapper[4737]: E0126 18:55:03.695829 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 18:55:03 crc kubenswrapper[4737]: E0126 18:55:03.696050 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d45lf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5pb7v_openstack(cac069b5-db5e-47ec-ada0-7e6acf1af111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:55:03 crc kubenswrapper[4737]: E0126 18:55:03.697182 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5pb7v" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.801395 4737 scope.go:117] "RemoveContainer" containerID="c76105450930f5c76ed15e2ed040f365f4a322bf2138c5c2073f549076e278fc" Jan 26 18:55:03 crc kubenswrapper[4737]: I0126 18:55:03.913985 4737 scope.go:117] "RemoveContainer" 
containerID="a8156db2183fd08413c7f56cf5b8bf860455800a49cb63df9d9f375e05e822a2" Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.078440 4737 scope.go:117] "RemoveContainer" containerID="d38e7f3d247ed56a6459e0e9ddadd13b7b573de70e09065d4acf8379f99a7d36" Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.177215 4737 scope.go:117] "RemoveContainer" containerID="0139b3b8a2667f813f0f611daa16ab2f4f01af86dcb8ff3a6f36ef7c7ed9b22e" Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.275015 4737 scope.go:117] "RemoveContainer" containerID="f2f9579a9dff8ba4e02e9b187368d702c7fcb91178fd7706d4f0b4ba38f27103" Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.375792 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336"} Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.394738 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:55:04 crc kubenswrapper[4737]: E0126 18:55:04.417900 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5pb7v" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.591232 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vbj8n"] Jan 26 18:55:04 crc kubenswrapper[4737]: W0126 18:55:04.606552 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59ecae78_d5c7_4104_b28e_fd9d70a69dc5.slice/crio-0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd WatchSource:0}: 
Error finding container 0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd: Status 404 returned error can't find the container with id 0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.647207 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.853334 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] Jan 26 18:55:04 crc kubenswrapper[4737]: I0126 18:55:04.939684 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.200752 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.447554 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerStarted","Data":"7e899d3b113f0d353bc9d3743fae421c517bb357d4e87d780a5d647f8a716d99"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.453418 4737 generic.go:334] "Generic (PLEG): container finished" podID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerID="1e33f30f584eabf982ca73432af480580edd8dd363deaa40485847805e6f2920" exitCode=0 Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.453501 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerDied","Data":"1e33f30f584eabf982ca73432af480580edd8dd363deaa40485847805e6f2920"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.453536 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" 
event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerStarted","Data":"3736f28ba7e345ebd8664dbb776cb6b78ebce675f8db265576579c2fffc6e954"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.455031 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerStarted","Data":"f440ea29c2469be4a2dd1a6f421238767af9d15fff6b0c58bff8a9cd59062828"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.457774 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" event={"ID":"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df","Type":"ContainerStarted","Data":"3f7e46dca0dfe0ebd90a530d279e4915e8fb985098df71536da56eb85a145e54"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.474625 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8nbml" event={"ID":"11147190-1d45-4798-83d7-449cd574a296","Type":"ContainerStarted","Data":"1a55b5355727b4b9301d1e272dea5dd64862e9b091b399e73471988209bb6ceb"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.487743 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbj8n" event={"ID":"59ecae78-d5c7-4104-b28e-fd9d70a69dc5","Type":"ContainerStarted","Data":"8da202852f6931d217e4caa89c850e91d6bf2550e6e26e0f040d0f3d96273499"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.487797 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbj8n" event={"ID":"59ecae78-d5c7-4104-b28e-fd9d70a69dc5","Type":"ContainerStarted","Data":"0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.490262 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerStarted","Data":"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.502274 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerStarted","Data":"334c1fc8abee0a67e38ad5da9a2e50bdaecaac6e5a1356993237fd30b9deec56"} Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.507141 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8nbml" podStartSLOduration=3.2101043320000002 podStartE2EDuration="33.507117059s" podCreationTimestamp="2026-01-26 18:54:32 +0000 UTC" firstStartedPulling="2026-01-26 18:54:34.340875555 +0000 UTC m=+1447.649070263" lastFinishedPulling="2026-01-26 18:55:04.637888282 +0000 UTC m=+1477.946082990" observedRunningTime="2026-01-26 18:55:05.4961373 +0000 UTC m=+1478.804332008" watchObservedRunningTime="2026-01-26 18:55:05.507117059 +0000 UTC m=+1478.815311767" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.538675 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vbj8n" podStartSLOduration=20.538654674 podStartE2EDuration="20.538654674s" podCreationTimestamp="2026-01-26 18:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:05.531546596 +0000 UTC m=+1478.839741304" watchObservedRunningTime="2026-01-26 18:55:05.538654674 +0000 UTC m=+1478.846849382" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.614244 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"] Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.617099 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.621257 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.622247 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.627443 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"] Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.745614 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746083 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746135 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746160 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2r2j\" (UniqueName: \"kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746239 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.746335 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.848899 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.850839 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.851519 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.851552 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.851806 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2r2j\" (UniqueName: \"kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.851858 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.852000 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle\") pod 
\"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.854953 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.859340 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.859503 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.859678 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.873878 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 
18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.874596 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5"
Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.875337 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2r2j\" (UniqueName: \"kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j\") pod \"neutron-6bf5799cfc-4n4l5\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " pod="openstack/neutron-6bf5799cfc-4n4l5"
Jan 26 18:55:05 crc kubenswrapper[4737]: I0126 18:55:05.944170 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bf5799cfc-4n4l5"
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.565934 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerStarted","Data":"fbdf9cd4e5898363e13e592218834c4a83818b60685c65abeca87b0bc8064703"}
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.571096 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerStarted","Data":"e71eaf70b7225ef5806219a093666ec7834a1bfd927b32cdcef79f1ad0f6a97d"}
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.571135 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerStarted","Data":"ab6b962c9faa096a1c52d6d51fa797c462cb80650a5f052fca9c9324622c4e4a"}
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.571163 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75ff77bb76-fx82z"
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.577870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerStarted","Data":"6debd79f4fd6a44a765ba729f95566537cb9c3413f3c8578e6c8bcef6cd06d62"}
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.584272 4737 generic.go:334] "Generic (PLEG): container finished" podID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerID="a082d91e479306d806f4f06e3ea3edc667a0cda0bb576e526b35b2693041391b" exitCode=0
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.585124 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" event={"ID":"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df","Type":"ContainerDied","Data":"a082d91e479306d806f4f06e3ea3edc667a0cda0bb576e526b35b2693041391b"}
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.596430 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75ff77bb76-fx82z" podStartSLOduration=3.596410016 podStartE2EDuration="3.596410016s" podCreationTimestamp="2026-01-26 18:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:06.593958708 +0000 UTC m=+1479.902153436" watchObservedRunningTime="2026-01-26 18:55:06.596410016 +0000 UTC m=+1479.904604724"
Jan 26 18:55:06 crc kubenswrapper[4737]: I0126 18:55:06.777087 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"]
Jan 26 18:55:06 crc kubenswrapper[4737]: W0126 18:55:06.863013 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdad184_ce5c_4bfe_a9dc_44f62de75095.slice/crio-8d7c413a9f90345c333eec90c45d55b623ca99617a0c83dad9863f3f96ec5f52 WatchSource:0}: Error finding container 8d7c413a9f90345c333eec90c45d55b623ca99617a0c83dad9863f3f96ec5f52: Status 404 returned error can't find the container with id 8d7c413a9f90345c333eec90c45d55b623ca99617a0c83dad9863f3f96ec5f52
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.599532 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerStarted","Data":"d7375bbb295caf445cf9905cc9faf3379d6b72b3a2f9e577ed4d1edfe37cb42b"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.602984 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" event={"ID":"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df","Type":"ContainerStarted","Data":"da0f1569d3723ad22594cbf6ff877b7d44a60da9fcf34692a0e6c5e652f990ec"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.603151 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz"
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.605122 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerStarted","Data":"0be6c934d819d7882080f2d5bcefc3f6ede201b6a0c105d7d0b2ec4ca03547ab"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.609719 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerStarted","Data":"ae744b77e619f224cdeb3592df13c75c2351589ab1635f3d1c5d15d4ae931b7a"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.612826 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerStarted","Data":"1d6d7c8edb5d6302c4e4d245e968f01ad07431961b94a53685a1942d2ea642f2"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.612882 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerStarted","Data":"8d7c413a9f90345c333eec90c45d55b623ca99617a0c83dad9863f3f96ec5f52"}
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.645829 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.64580347 podStartE2EDuration="5.64580347s" podCreationTimestamp="2026-01-26 18:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:07.622638103 +0000 UTC m=+1480.930832821" watchObservedRunningTime="2026-01-26 18:55:07.64580347 +0000 UTC m=+1480.953998178"
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.657476 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.657453735 podStartE2EDuration="22.657453735s" podCreationTimestamp="2026-01-26 18:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:07.653499273 +0000 UTC m=+1480.961693981" watchObservedRunningTime="2026-01-26 18:55:07.657453735 +0000 UTC m=+1480.965648433"
Jan 26 18:55:07 crc kubenswrapper[4737]: I0126 18:55:07.716165 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" podStartSLOduration=5.716138702 podStartE2EDuration="5.716138702s" podCreationTimestamp="2026-01-26 18:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:07.707898907 +0000 UTC m=+1481.016093615" watchObservedRunningTime="2026-01-26 18:55:07.716138702 +0000 UTC m=+1481.024333410"
Jan 26 18:55:08 crc kubenswrapper[4737]: I0126 18:55:08.637953 4737 generic.go:334] "Generic (PLEG): container finished" podID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerID="ae744b77e619f224cdeb3592df13c75c2351589ab1635f3d1c5d15d4ae931b7a" exitCode=0
Jan 26 18:55:08 crc kubenswrapper[4737]: I0126 18:55:08.638176 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerDied","Data":"ae744b77e619f224cdeb3592df13c75c2351589ab1635f3d1c5d15d4ae931b7a"}
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.504657 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.505634 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.572750 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.573422 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.684333 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:12 crc kubenswrapper[4737]: I0126 18:55:12.684403 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:55:13 crc kubenswrapper[4737]: I0126 18:55:13.341283 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz"
Jan 26 18:55:13 crc kubenswrapper[4737]: I0126 18:55:13.427765 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"]
Jan 26 18:55:13 crc kubenswrapper[4737]: I0126 18:55:13.428032 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="dnsmasq-dns" containerID="cri-o://77d45fcac6a9c74293c6ce3e47d05de62ab15841ad7284eb54126fb8304f13d7" gracePeriod=10
Jan 26 18:55:14 crc kubenswrapper[4737]: I0126 18:55:14.713560 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerStarted","Data":"7645cee4e5194b787f2e662f685e95d5a2e16b3b5b6472e3876f7af25c7dbd3b"}
Jan 26 18:55:14 crc kubenswrapper[4737]: I0126 18:55:14.717273 4737 generic.go:334] "Generic (PLEG): container finished" podID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerID="77d45fcac6a9c74293c6ce3e47d05de62ab15841ad7284eb54126fb8304f13d7" exitCode=0
Jan 26 18:55:14 crc kubenswrapper[4737]: I0126 18:55:14.717306 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" event={"ID":"093651a2-4ab4-4c4a-8b9a-16836c7117bc","Type":"ContainerDied","Data":"77d45fcac6a9c74293c6ce3e47d05de62ab15841ad7284eb54126fb8304f13d7"}
Jan 26 18:55:14 crc kubenswrapper[4737]: I0126 18:55:14.938878 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.035443 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.035546 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.035594 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.035643 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7gfj\" (UniqueName: \"kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.036022 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.036173 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb\") pod \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\" (UID: \"093651a2-4ab4-4c4a-8b9a-16836c7117bc\") "
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.047968 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj" (OuterVolumeSpecName: "kube-api-access-t7gfj") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "kube-api-access-t7gfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.109558 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.111725 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.123415 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.124730 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config" (OuterVolumeSpecName: "config") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.129436 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "093651a2-4ab4-4c4a-8b9a-16836c7117bc" (UID: "093651a2-4ab4-4c4a-8b9a-16836c7117bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140742 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-config\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140813 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140832 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140842 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140852 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/093651a2-4ab4-4c4a-8b9a-16836c7117bc-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.140863 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7gfj\" (UniqueName: \"kubernetes.io/projected/093651a2-4ab4-4c4a-8b9a-16836c7117bc-kube-api-access-t7gfj\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.732927 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kdfn7" event={"ID":"54a9f74e-fc12-43b7-aca3-0594480e0222","Type":"ContainerStarted","Data":"6e28763de49ab84419a183827eeaa2498baa40575e3f5b2ab71c1383ba21e7bf"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.739543 4737 generic.go:334] "Generic (PLEG): container finished" podID="11147190-1d45-4798-83d7-449cd574a296" containerID="1a55b5355727b4b9301d1e272dea5dd64862e9b091b399e73471988209bb6ceb" exitCode=0
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.739641 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8nbml" event={"ID":"11147190-1d45-4798-83d7-449cd574a296","Type":"ContainerDied","Data":"1a55b5355727b4b9301d1e272dea5dd64862e9b091b399e73471988209bb6ceb"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.745778 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerStarted","Data":"a60e39cc36a0baceb992e4211988031bbb9fa64910f8224709438291f858fbe4"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.757524 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.757586 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.757636 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.757682 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.758431 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22" event={"ID":"093651a2-4ab4-4c4a-8b9a-16836c7117bc","Type":"ContainerDied","Data":"9a687d1a7a80b9dcd56da66bd46e7fb94a6efb862b59f42657c8654131dc3582"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.758484 4737 scope.go:117] "RemoveContainer" containerID="77d45fcac6a9c74293c6ce3e47d05de62ab15841ad7284eb54126fb8304f13d7"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.758536 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-lmn22"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.763417 4737 generic.go:334] "Generic (PLEG): container finished" podID="59ecae78-d5c7-4104-b28e-fd9d70a69dc5" containerID="8da202852f6931d217e4caa89c850e91d6bf2550e6e26e0f040d0f3d96273499" exitCode=0
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.763479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbj8n" event={"ID":"59ecae78-d5c7-4104-b28e-fd9d70a69dc5","Type":"ContainerDied","Data":"8da202852f6931d217e4caa89c850e91d6bf2550e6e26e0f040d0f3d96273499"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.766354 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerStarted","Data":"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.778612 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crvp5" event={"ID":"31ee14c5-9b8d-4903-afc7-0b7c643b2756","Type":"ContainerStarted","Data":"a2cb887eb23910e377c3962c778ebf4c69b9b70feab0dfb04d4461abc41fd260"}
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.779179 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bf5799cfc-4n4l5"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.781448 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-kdfn7" podStartSLOduration=3.362324562 podStartE2EDuration="44.781435223s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="2026-01-26 18:54:33.499307851 +0000 UTC m=+1446.807502559" lastFinishedPulling="2026-01-26 18:55:14.918418512 +0000 UTC m=+1488.226613220" observedRunningTime="2026-01-26 18:55:15.748921664 +0000 UTC m=+1489.057116372" watchObservedRunningTime="2026-01-26 18:55:15.781435223 +0000 UTC m=+1489.089629921"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.792956 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7g98" podStartSLOduration=19.025300332 podStartE2EDuration="28.792934725s" podCreationTimestamp="2026-01-26 18:54:47 +0000 UTC" firstStartedPulling="2026-01-26 18:55:05.455396457 +0000 UTC m=+1478.763591165" lastFinishedPulling="2026-01-26 18:55:15.22303085 +0000 UTC m=+1488.531225558" observedRunningTime="2026-01-26 18:55:15.778550895 +0000 UTC m=+1489.086745603" watchObservedRunningTime="2026-01-26 18:55:15.792934725 +0000 UTC m=+1489.101129423"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.817359 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.817441 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.827996 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-crvp5" podStartSLOduration=4.046837475 podStartE2EDuration="44.827970413s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="2026-01-26 18:54:34.409556418 +0000 UTC m=+1447.717751126" lastFinishedPulling="2026-01-26 18:55:15.190689356 +0000 UTC m=+1488.498884064" observedRunningTime="2026-01-26 18:55:15.826343254 +0000 UTC m=+1489.134537982" watchObservedRunningTime="2026-01-26 18:55:15.827970413 +0000 UTC m=+1489.136165121"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.844452 4737 scope.go:117] "RemoveContainer" containerID="871f950cb19477940b7dc8a749acc98004ad6e09bf6f85ec85d3aff84bc93bdc"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.893901 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bf5799cfc-4n4l5" podStartSLOduration=10.89387516 podStartE2EDuration="10.89387516s" podCreationTimestamp="2026-01-26 18:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:15.882419039 +0000 UTC m=+1489.190613747" watchObservedRunningTime="2026-01-26 18:55:15.89387516 +0000 UTC m=+1489.202069868"
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.967290 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"]
Jan 26 18:55:15 crc kubenswrapper[4737]: I0126 18:55:15.984497 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-lmn22"]
Jan 26 18:55:16 crc kubenswrapper[4737]: I0126 18:55:16.792298 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5pb7v" event={"ID":"cac069b5-db5e-47ec-ada0-7e6acf1af111","Type":"ContainerStarted","Data":"0812037e61aaa15557e83ff51841b9c58954816a3e829827c7b6ca441d2a80ac"}
Jan 26 18:55:16 crc kubenswrapper[4737]: I0126 18:55:16.824778 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5pb7v" podStartSLOduration=4.016075701 podStartE2EDuration="45.824759217s" podCreationTimestamp="2026-01-26 18:54:31 +0000 UTC" firstStartedPulling="2026-01-26 18:54:33.72781278 +0000 UTC m=+1447.036007488" lastFinishedPulling="2026-01-26 18:55:15.536496296 +0000 UTC m=+1488.844691004" observedRunningTime="2026-01-26 18:55:16.824708166 +0000 UTC m=+1490.132902874" watchObservedRunningTime="2026-01-26 18:55:16.824759217 +0000 UTC m=+1490.132953925"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.052450 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" path="/var/lib/kubelet/pods/093651a2-4ab4-4c4a-8b9a-16836c7117bc/volumes"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.494610 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.503464 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8nbml"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659171 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659239 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659272 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs\") pod \"11147190-1d45-4798-83d7-449cd574a296\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659289 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts\") pod \"11147190-1d45-4798-83d7-449cd574a296\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659393 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84wt6\" (UniqueName: \"kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659468 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659604 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659687 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659724 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data\") pod \"11147190-1d45-4798-83d7-449cd574a296\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659749 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle\") pod \"11147190-1d45-4798-83d7-449cd574a296\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.659794 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6q8t\" (UniqueName: \"kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t\") pod \"11147190-1d45-4798-83d7-449cd574a296\" (UID: \"11147190-1d45-4798-83d7-449cd574a296\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.667862 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t" (OuterVolumeSpecName: "kube-api-access-p6q8t") pod "11147190-1d45-4798-83d7-449cd574a296" (UID: "11147190-1d45-4798-83d7-449cd574a296"). InnerVolumeSpecName "kube-api-access-p6q8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.668877 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs" (OuterVolumeSpecName: "logs") pod "11147190-1d45-4798-83d7-449cd574a296" (UID: "11147190-1d45-4798-83d7-449cd574a296"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.678333 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.682688 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts" (OuterVolumeSpecName: "scripts") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.682793 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6" (OuterVolumeSpecName: "kube-api-access-84wt6") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "kube-api-access-84wt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.692747 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.696305 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts" (OuterVolumeSpecName: "scripts") pod "11147190-1d45-4798-83d7-449cd574a296" (UID: "11147190-1d45-4798-83d7-449cd574a296"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.739714 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data" (OuterVolumeSpecName: "config-data") pod "11147190-1d45-4798-83d7-449cd574a296" (UID: "11147190-1d45-4798-83d7-449cd574a296"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.744597 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11147190-1d45-4798-83d7-449cd574a296" (UID: "11147190-1d45-4798-83d7-449cd574a296"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: E0126 18:55:17.744757 4737 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data podName:59ecae78-d5c7-4104-b28e-fd9d70a69dc5 nodeName:}" failed. No retries permitted until 2026-01-26 18:55:18.244715334 +0000 UTC m=+1491.552910042 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5") : error deleting /var/lib/kubelet/pods/59ecae78-d5c7-4104-b28e-fd9d70a69dc5/volume-subpaths: remove /var/lib/kubelet/pods/59ecae78-d5c7-4104-b28e-fd9d70a69dc5/volume-subpaths: no such file or directory
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.761431 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.761598 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") "
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762299 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11147190-1d45-4798-83d7-449cd574a296-logs\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762425 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762441 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84wt6\" (UniqueName: \"kubernetes.io/projected/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-kube-api-access-84wt6\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762484 4737 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762506 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762522 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11147190-1d45-4798-83d7-449cd574a296-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762533 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6q8t\" (UniqueName: \"kubernetes.io/projected/11147190-1d45-4798-83d7-449cd574a296-kube-api-access-p6q8t\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762541 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762548 4737 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: W0126 18:55:17.762646 4737 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/59ecae78-d5c7-4104-b28e-fd9d70a69dc5/volumes/kubernetes.io~secret/combined-ca-bundle
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.762660 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.823355 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vbj8n" event={"ID":"59ecae78-d5c7-4104-b28e-fd9d70a69dc5","Type":"ContainerDied","Data":"0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd"}
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.824011 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fbb0cf7b9115933c509cd9c54f338a74635a4105c59b4d97cff8da39b2266cd"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.823553 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vbj8n"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.833498 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8nbml"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.835140 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8nbml" event={"ID":"11147190-1d45-4798-83d7-449cd574a296","Type":"ContainerDied","Data":"adbd01fefa44fc8454f428e996b1c6b81479ae26c7c1d2dbb979e864cb2709ce"}
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.835218 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbd01fefa44fc8454f428e996b1c6b81479ae26c7c1d2dbb979e864cb2709ce"
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.865207 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.950894 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c974878b4-m6rmv"]
Jan 26 18:55:17 crc kubenswrapper[4737]: E0126 18:55:17.952127 4737 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="init" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952149 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="init" Jan 26 18:55:17 crc kubenswrapper[4737]: E0126 18:55:17.952163 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11147190-1d45-4798-83d7-449cd574a296" containerName="placement-db-sync" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952170 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="11147190-1d45-4798-83d7-449cd574a296" containerName="placement-db-sync" Jan 26 18:55:17 crc kubenswrapper[4737]: E0126 18:55:17.952203 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ecae78-d5c7-4104-b28e-fd9d70a69dc5" containerName="keystone-bootstrap" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952211 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ecae78-d5c7-4104-b28e-fd9d70a69dc5" containerName="keystone-bootstrap" Jan 26 18:55:17 crc kubenswrapper[4737]: E0126 18:55:17.952224 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="dnsmasq-dns" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952230 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="dnsmasq-dns" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952422 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ecae78-d5c7-4104-b28e-fd9d70a69dc5" containerName="keystone-bootstrap" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952436 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="11147190-1d45-4798-83d7-449cd574a296" containerName="placement-db-sync" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.952459 4737 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="093651a2-4ab4-4c4a-8b9a-16836c7117bc" containerName="dnsmasq-dns" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.953615 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.960132 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.960422 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.960443 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vnvks" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.965603 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.965897 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 18:55:17 crc kubenswrapper[4737]: I0126 18:55:17.975974 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c974878b4-m6rmv"] Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.042180 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-86b84744f8-59mdj"] Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.045548 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.053503 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.053646 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.069765 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-scripts\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.069845 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-public-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.070033 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrh6\" (UniqueName: \"kubernetes.io/projected/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-kube-api-access-vmrh6\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.070178 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-internal-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " 
pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.070251 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-combined-ca-bundle\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.070354 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-logs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.070403 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-config-data\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.089520 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86b84744f8-59mdj"] Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.172910 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrh6\" (UniqueName: \"kubernetes.io/projected/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-kube-api-access-vmrh6\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.173017 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-credential-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174295 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-internal-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174362 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-public-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174418 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-combined-ca-bundle\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174468 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-internal-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174566 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-logs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174607 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfbf\" (UniqueName: \"kubernetes.io/projected/682c692a-8447-4b49-b81d-98b7fa9ccec1-kube-api-access-2vfbf\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174636 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-config-data\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174664 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-fernet-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174701 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-config-data\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174733 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-combined-ca-bundle\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174843 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-scripts\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174918 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-scripts\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.174979 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-public-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.175680 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-logs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.181819 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-internal-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: 
\"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.183634 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-public-tls-certs\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.183799 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-config-data\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.183988 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-scripts\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.184406 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-combined-ca-bundle\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.205041 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrh6\" (UniqueName: \"kubernetes.io/projected/faf8de27-9da1-4a0d-9edf-ebb5d53fc272-kube-api-access-vmrh6\") pod \"placement-c974878b4-m6rmv\" (UID: \"faf8de27-9da1-4a0d-9edf-ebb5d53fc272\") " pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: 
I0126 18:55:18.276394 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") pod \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\" (UID: \"59ecae78-d5c7-4104-b28e-fd9d70a69dc5\") " Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.276928 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-internal-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.277140 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vfbf\" (UniqueName: \"kubernetes.io/projected/682c692a-8447-4b49-b81d-98b7fa9ccec1-kube-api-access-2vfbf\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.277583 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-fernet-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.277629 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-config-data\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.277655 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-combined-ca-bundle\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.278118 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-scripts\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.278270 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-credential-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.278376 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-public-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.282433 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data" (OuterVolumeSpecName: "config-data") pod "59ecae78-d5c7-4104-b28e-fd9d70a69dc5" (UID: "59ecae78-d5c7-4104-b28e-fd9d70a69dc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.284145 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-scripts\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.286172 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-internal-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.286877 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-fernet-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.289239 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-config-data\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.289798 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.294254 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-combined-ca-bundle\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.298719 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-credential-keys\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.299413 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/682c692a-8447-4b49-b81d-98b7fa9ccec1-public-tls-certs\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.304532 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vfbf\" (UniqueName: \"kubernetes.io/projected/682c692a-8447-4b49-b81d-98b7fa9ccec1-kube-api-access-2vfbf\") pod \"keystone-86b84744f8-59mdj\" (UID: \"682c692a-8447-4b49-b81d-98b7fa9ccec1\") " pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.310814 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.311813 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:18 crc kubenswrapper[4737]: 
I0126 18:55:18.376255 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.381674 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59ecae78-d5c7-4104-b28e-fd9d70a69dc5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.702849 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.703729 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:55:18 crc kubenswrapper[4737]: I0126 18:55:18.722633 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.143384 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c974878b4-m6rmv"] Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.296436 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86b84744f8-59mdj"] Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.398290 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7g98" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="registry-server" probeResult="failure" output=< Jan 26 18:55:19 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 18:55:19 crc kubenswrapper[4737]: > Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.887139 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86b84744f8-59mdj" event={"ID":"682c692a-8447-4b49-b81d-98b7fa9ccec1","Type":"ContainerStarted","Data":"67a7fe60f65d185b276c4d45ec2cbc5b2a611dd60cd077dee71c1647d646741a"} Jan 26 18:55:19 crc kubenswrapper[4737]: 
I0126 18:55:19.887511 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86b84744f8-59mdj" event={"ID":"682c692a-8447-4b49-b81d-98b7fa9ccec1","Type":"ContainerStarted","Data":"1d3f81f0d92fe0155e11b0a475999fb5be398c4c67df3a80e9529bb8af26609e"} Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.889149 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.901922 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c974878b4-m6rmv" event={"ID":"faf8de27-9da1-4a0d-9edf-ebb5d53fc272","Type":"ContainerStarted","Data":"19425cc5c4b8d9d8c1ad9ed887c4945115b0163cf98059407803a09c60be61f8"} Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.901977 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c974878b4-m6rmv" event={"ID":"faf8de27-9da1-4a0d-9edf-ebb5d53fc272","Type":"ContainerStarted","Data":"7bba842f8718cb1d9a533a507d56cde65e07e920bcd9d0779203709c30ccd008"} Jan 26 18:55:19 crc kubenswrapper[4737]: I0126 18:55:19.937975 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-86b84744f8-59mdj" podStartSLOduration=2.937947389 podStartE2EDuration="2.937947389s" podCreationTimestamp="2026-01-26 18:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:19.920640876 +0000 UTC m=+1493.228835604" watchObservedRunningTime="2026-01-26 18:55:19.937947389 +0000 UTC m=+1493.246142087" Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.716576 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.717273 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:55:20 crc 
kubenswrapper[4737]: I0126 18:55:20.722322 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.926491 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c974878b4-m6rmv" event={"ID":"faf8de27-9da1-4a0d-9edf-ebb5d53fc272","Type":"ContainerStarted","Data":"4059ede9b81fa09fa2d03a56b0556683c7f3e30539ea342facda882d985331b8"} Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.926998 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.927029 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.929187 4737 generic.go:334] "Generic (PLEG): container finished" podID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" containerID="a2cb887eb23910e377c3962c778ebf4c69b9b70feab0dfb04d4461abc41fd260" exitCode=0 Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.929376 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crvp5" event={"ID":"31ee14c5-9b8d-4903-afc7-0b7c643b2756","Type":"ContainerDied","Data":"a2cb887eb23910e377c3962c778ebf4c69b9b70feab0dfb04d4461abc41fd260"} Jan 26 18:55:20 crc kubenswrapper[4737]: I0126 18:55:20.976630 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c974878b4-m6rmv" podStartSLOduration=3.976606316 podStartE2EDuration="3.976606316s" podCreationTimestamp="2026-01-26 18:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:20.961178456 +0000 UTC m=+1494.269373174" watchObservedRunningTime="2026-01-26 18:55:20.976606316 +0000 UTC m=+1494.284801034" Jan 26 18:55:23 crc kubenswrapper[4737]: I0126 
18:55:23.965469 4737 generic.go:334] "Generic (PLEG): container finished" podID="54a9f74e-fc12-43b7-aca3-0594480e0222" containerID="6e28763de49ab84419a183827eeaa2498baa40575e3f5b2ab71c1383ba21e7bf" exitCode=0 Jan 26 18:55:23 crc kubenswrapper[4737]: I0126 18:55:23.965551 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kdfn7" event={"ID":"54a9f74e-fc12-43b7-aca3-0594480e0222","Type":"ContainerDied","Data":"6e28763de49ab84419a183827eeaa2498baa40575e3f5b2ab71c1383ba21e7bf"} Jan 26 18:55:23 crc kubenswrapper[4737]: I0126 18:55:23.968221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-crvp5" event={"ID":"31ee14c5-9b8d-4903-afc7-0b7c643b2756","Type":"ContainerDied","Data":"f9824b8a74b0863a62a8520fd09957425e44a256b6ac5508d28cc8e1554277a3"} Jan 26 18:55:23 crc kubenswrapper[4737]: I0126 18:55:23.968257 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9824b8a74b0863a62a8520fd09957425e44a256b6ac5508d28cc8e1554277a3" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.082548 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-crvp5" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.149322 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle\") pod \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.149398 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data\") pod \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.149611 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfm6g\" (UniqueName: \"kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g\") pod \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\" (UID: \"31ee14c5-9b8d-4903-afc7-0b7c643b2756\") " Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.154531 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31ee14c5-9b8d-4903-afc7-0b7c643b2756" (UID: "31ee14c5-9b8d-4903-afc7-0b7c643b2756"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.155533 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g" (OuterVolumeSpecName: "kube-api-access-vfm6g") pod "31ee14c5-9b8d-4903-afc7-0b7c643b2756" (UID: "31ee14c5-9b8d-4903-afc7-0b7c643b2756"). 
InnerVolumeSpecName "kube-api-access-vfm6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.202515 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31ee14c5-9b8d-4903-afc7-0b7c643b2756" (UID: "31ee14c5-9b8d-4903-afc7-0b7c643b2756"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.252882 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.252917 4737 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31ee14c5-9b8d-4903-afc7-0b7c643b2756-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.252926 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfm6g\" (UniqueName: \"kubernetes.io/projected/31ee14c5-9b8d-4903-afc7-0b7c643b2756-kube-api-access-vfm6g\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.983178 4737 generic.go:334] "Generic (PLEG): container finished" podID="cac069b5-db5e-47ec-ada0-7e6acf1af111" containerID="0812037e61aaa15557e83ff51841b9c58954816a3e829827c7b6ca441d2a80ac" exitCode=0 Jan 26 18:55:24 crc kubenswrapper[4737]: I0126 18:55:24.983578 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-crvp5" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.005026 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerStarted","Data":"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757"} Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.005144 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5pb7v" event={"ID":"cac069b5-db5e-47ec-ada0-7e6acf1af111","Type":"ContainerDied","Data":"0812037e61aaa15557e83ff51841b9c58954816a3e829827c7b6ca441d2a80ac"} Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.438140 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-866f479b9-7wv96"] Jan 26 18:55:25 crc kubenswrapper[4737]: E0126 18:55:25.438968 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" containerName="barbican-db-sync" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.438980 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" containerName="barbican-db-sync" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.439185 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" containerName="barbican-db-sync" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.440769 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.448017 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.448464 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2b6wq" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.448604 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.454957 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.457034 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.463339 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.508166 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.541699 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866f479b9-7wv96"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582094 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-combined-ca-bundle\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582163 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data-custom\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582198 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582242 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data-custom\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582273 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582323 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-logs\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " 
pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582460 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b84a5366-14c9-4b93-b185-18a4e3695ed7-logs\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582496 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fl7\" (UniqueName: \"kubernetes.io/projected/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-kube-api-access-59fl7\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582522 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqdr\" (UniqueName: \"kubernetes.io/projected/b84a5366-14c9-4b93-b185-18a4e3695ed7-kube-api-access-ktqdr\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.582549 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.655151 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 
18:55:25.657749 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.676850 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.723310 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b84a5366-14c9-4b93-b185-18a4e3695ed7-logs\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.723439 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fl7\" (UniqueName: \"kubernetes.io/projected/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-kube-api-access-59fl7\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.729190 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b84a5366-14c9-4b93-b185-18a4e3695ed7-logs\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.748184 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.723493 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqdr\" (UniqueName: \"kubernetes.io/projected/b84a5366-14c9-4b93-b185-18a4e3695ed7-kube-api-access-ktqdr\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " 
pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.753882 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.753945 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-combined-ca-bundle\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.754021 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data-custom\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.754086 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.754167 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data-custom\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: 
\"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.754223 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.754356 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-logs\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.755464 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-logs\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.763567 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.773433 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-combined-ca-bundle\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.774511 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.774908 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fl7\" (UniqueName: \"kubernetes.io/projected/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-kube-api-access-59fl7\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.790471 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data-custom\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.798088 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqdr\" (UniqueName: \"kubernetes.io/projected/b84a5366-14c9-4b93-b185-18a4e3695ed7-kube-api-access-ktqdr\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.802740 4737 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.820365 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b84a5366-14c9-4b93-b185-18a4e3695ed7-config-data-custom\") pod \"barbican-worker-866f479b9-7wv96\" (UID: \"b84a5366-14c9-4b93-b185-18a4e3695ed7\") " pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.836206 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.848318 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-866f479b9-7wv96" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.850928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.856825 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.856941 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857008 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqjv\" (UniqueName: \"kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857049 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857109 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857140 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857169 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857190 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6lr\" (UniqueName: \"kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857240 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.857279 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.861465 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b-config-data\") pod \"barbican-keystone-listener-5c5b6c8cdb-gwc7x\" (UID: \"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b\") " pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.870747 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.914290 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kdfn7" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.958482 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data\") pod \"54a9f74e-fc12-43b7-aca3-0594480e0222\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.958544 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle\") pod \"54a9f74e-fc12-43b7-aca3-0594480e0222\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.958675 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxnd\" (UniqueName: \"kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd\") pod \"54a9f74e-fc12-43b7-aca3-0594480e0222\" (UID: \"54a9f74e-fc12-43b7-aca3-0594480e0222\") " Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959024 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959093 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqjv\" (UniqueName: \"kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959145 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959190 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959219 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959246 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959259 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959280 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv6lr\" (UniqueName: \"kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959308 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959341 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.959358 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.962125 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.967187 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.975762 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.975762 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.976508 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs\") pod \"barbican-api-689b884cd-xd7w8\" (UID: 
\"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.977523 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.981969 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.983940 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.986739 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd" (OuterVolumeSpecName: "kube-api-access-spxnd") pod "54a9f74e-fc12-43b7-aca3-0594480e0222" (UID: "54a9f74e-fc12-43b7-aca3-0594480e0222"). InnerVolumeSpecName "kube-api-access-spxnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:25 crc kubenswrapper[4737]: I0126 18:55:25.989310 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.000397 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv6lr\" (UniqueName: \"kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr\") pod \"dnsmasq-dns-85ff748b95-tmbk8\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.014991 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kdfn7" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.015238 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kdfn7" event={"ID":"54a9f74e-fc12-43b7-aca3-0594480e0222","Type":"ContainerDied","Data":"fcda7f7865bf8ceadf7b23f11e3e35be2d4df8bde0693def7e093444acf3e2c1"} Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.015282 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcda7f7865bf8ceadf7b23f11e3e35be2d4df8bde0693def7e093444acf3e2c1" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.015282 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqjv\" (UniqueName: \"kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv\") pod \"barbican-api-689b884cd-xd7w8\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.062254 4737 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54a9f74e-fc12-43b7-aca3-0594480e0222" (UID: "54a9f74e-fc12-43b7-aca3-0594480e0222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.063819 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxnd\" (UniqueName: \"kubernetes.io/projected/54a9f74e-fc12-43b7-aca3-0594480e0222-kube-api-access-spxnd\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.063931 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.219518 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data" (OuterVolumeSpecName: "config-data") pod "54a9f74e-fc12-43b7-aca3-0594480e0222" (UID: "54a9f74e-fc12-43b7-aca3-0594480e0222"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.274989 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9f74e-fc12-43b7-aca3-0594480e0222-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.283898 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.301772 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.624487 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.693890 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.694028 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.694093 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.694178 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.694280 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: 
\"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.694313 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45lf\" (UniqueName: \"kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf\") pod \"cac069b5-db5e-47ec-ada0-7e6acf1af111\" (UID: \"cac069b5-db5e-47ec-ada0-7e6acf1af111\") " Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.701248 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts" (OuterVolumeSpecName: "scripts") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.701598 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.702662 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf" (OuterVolumeSpecName: "kube-api-access-d45lf") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "kube-api-access-d45lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: W0126 18:55:26.702738 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb84a5366_14c9_4b93_b185_18a4e3695ed7.slice/crio-39ca47aa18ac910f351503d58e243f3b9d3939fb07474962f170ab705f760cf2 WatchSource:0}: Error finding container 39ca47aa18ac910f351503d58e243f3b9d3939fb07474962f170ab705f760cf2: Status 404 returned error can't find the container with id 39ca47aa18ac910f351503d58e243f3b9d3939fb07474962f170ab705f760cf2 Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.706789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.718821 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-866f479b9-7wv96"] Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.754195 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.800939 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data" (OuterVolumeSpecName: "config-data") pod "cac069b5-db5e-47ec-ada0-7e6acf1af111" (UID: "cac069b5-db5e-47ec-ada0-7e6acf1af111"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.801641 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.801672 4737 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cac069b5-db5e-47ec-ada0-7e6acf1af111-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.801682 4737 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.801692 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45lf\" (UniqueName: \"kubernetes.io/projected/cac069b5-db5e-47ec-ada0-7e6acf1af111-kube-api-access-d45lf\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.801705 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.863019 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x"] Jan 26 18:55:26 crc kubenswrapper[4737]: I0126 18:55:26.910867 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac069b5-db5e-47ec-ada0-7e6acf1af111-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.062640 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5pb7v" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.063446 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5pb7v" event={"ID":"cac069b5-db5e-47ec-ada0-7e6acf1af111","Type":"ContainerDied","Data":"f672bd6815e1dba3fa766b1bd4fb4a64a0af4b9e36fb8969c36d7c27f6e3927d"} Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.063498 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f672bd6815e1dba3fa766b1bd4fb4a64a0af4b9e36fb8969c36d7c27f6e3927d" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.070339 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866f479b9-7wv96" event={"ID":"b84a5366-14c9-4b93-b185-18a4e3695ed7","Type":"ContainerStarted","Data":"39ca47aa18ac910f351503d58e243f3b9d3939fb07474962f170ab705f760cf2"} Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.073136 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" event={"ID":"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b","Type":"ContainerStarted","Data":"8c9b39a86c276411717596b863a04ed642cfca70a09cdf4586acc1c266633071"} Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.115033 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.138008 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:27 crc kubenswrapper[4737]: W0126 18:55:27.154644 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcec54497_f9a7_4d22_8989_a78d815df93c.slice/crio-d49ab26b7c5c9fbd156c998dd8e8ce5dc3666e7cc3dae8ef13954ecd74185778 WatchSource:0}: Error finding container d49ab26b7c5c9fbd156c998dd8e8ce5dc3666e7cc3dae8ef13954ecd74185778: Status 404 returned error can't find the container with id d49ab26b7c5c9fbd156c998dd8e8ce5dc3666e7cc3dae8ef13954ecd74185778 Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.384753 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:27 crc kubenswrapper[4737]: E0126 18:55:27.406105 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" containerName="cinder-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.406150 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" containerName="cinder-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: E0126 18:55:27.406186 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" containerName="heat-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.406194 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" containerName="heat-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.406624 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" containerName="cinder-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.406663 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" containerName="heat-db-sync" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.429525 4737 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.441884 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.442140 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7qtqf" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.442304 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.442448 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.491090 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.512157 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.553877 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.560289 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.580850 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.592951 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.593045 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqmz\" (UniqueName: \"kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.593087 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.593137 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.593183 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.593205 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.695871 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.696706 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.698334 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snslv\" (UniqueName: \"kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.698496 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.698697 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqmz\" (UniqueName: \"kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.698799 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.698970 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.699180 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.699574 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.699683 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.699883 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.700004 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.700218 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.700285 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.717697 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.720718 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.720926 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.722550 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.723473 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.735174 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.737118 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.747979 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqmz\" (UniqueName: 
\"kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz\") pod \"cinder-scheduler-0\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.792408 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802641 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802714 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802754 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802785 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802871 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-snslv\" (UniqueName: \"kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802907 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802949 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.802991 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94b8\" (UniqueName: \"kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.803036 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.803302 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.803326 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.803375 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.803430 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.804771 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.804832 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: 
\"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.805553 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.805772 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.806094 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.825252 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snslv\" (UniqueName: \"kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv\") pod \"dnsmasq-dns-5c9776ccc5-6rgnn\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.913957 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc 
kubenswrapper[4737]: I0126 18:55:27.914053 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914130 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914157 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914177 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914301 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914342 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t94b8\" (UniqueName: 
\"kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.914672 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.915537 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.915787 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.925873 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.925906 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.945679 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts\") pod 
\"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.955219 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:27 crc kubenswrapper[4737]: I0126 18:55:27.984747 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t94b8\" (UniqueName: \"kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8\") pod \"cinder-api-0\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " pod="openstack/cinder-api-0" Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.066489 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.129221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerStarted","Data":"3e3d61f1f8efce9665ab8c6ea8d0897e1affcdd5f7d0a7c74ad7558a5cdb1277"} Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.129280 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerStarted","Data":"d49ab26b7c5c9fbd156c998dd8e8ce5dc3666e7cc3dae8ef13954ecd74185778"} Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.157101 4737 generic.go:334] "Generic (PLEG): container finished" podID="df72a93c-eb25-4b5a-bb6c-0b989ce0b993" containerID="9a19566354bd643a741ce639bfb6a45dcfeabfc7524e9c428e5d834fba6b16e2" exitCode=0 Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.157169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" event={"ID":"df72a93c-eb25-4b5a-bb6c-0b989ce0b993","Type":"ContainerDied","Data":"9a19566354bd643a741ce639bfb6a45dcfeabfc7524e9c428e5d834fba6b16e2"} Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.157207 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" event={"ID":"df72a93c-eb25-4b5a-bb6c-0b989ce0b993","Type":"ContainerStarted","Data":"856c1d01db56e179fc0e83eee2f36161aeac3a0dc4662c5d563f05a8dadc7aed"} Jan 26 18:55:28 crc kubenswrapper[4737]: E0126 18:55:28.177308 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf72a93c_eb25_4b5a_bb6c_0b989ce0b993.slice/crio-9a19566354bd643a741ce639bfb6a45dcfeabfc7524e9c428e5d834fba6b16e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf72a93c_eb25_4b5a_bb6c_0b989ce0b993.slice/crio-conmon-9a19566354bd643a741ce639bfb6a45dcfeabfc7524e9c428e5d834fba6b16e2.scope\": RecentStats: unable to find data in memory cache]" Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.427211 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.519635 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:28 crc kubenswrapper[4737]: I0126 18:55:28.781200 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.001118 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059634 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059712 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059752 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059822 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059868 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.059953 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv6lr\" 
(UniqueName: \"kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr\") pod \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\" (UID: \"df72a93c-eb25-4b5a-bb6c-0b989ce0b993\") " Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.080266 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr" (OuterVolumeSpecName: "kube-api-access-xv6lr") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "kube-api-access-xv6lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.130051 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.142631 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.170577 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.170639 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.170663 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv6lr\" (UniqueName: \"kubernetes.io/projected/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-kube-api-access-xv6lr\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.202720 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.215851 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config" (OuterVolumeSpecName: "config") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.227958 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" event={"ID":"df72a93c-eb25-4b5a-bb6c-0b989ce0b993","Type":"ContainerDied","Data":"856c1d01db56e179fc0e83eee2f36161aeac3a0dc4662c5d563f05a8dadc7aed"} Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.227985 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-tmbk8" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.228026 4737 scope.go:117] "RemoveContainer" containerID="9a19566354bd643a741ce639bfb6a45dcfeabfc7524e9c428e5d834fba6b16e2" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.234629 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerStarted","Data":"14468dfc9b5395ce444e1d5e2d3fc9905c9a7e4ac33b331a53e7cf5718691c7a"} Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.234725 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.235632 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.243462 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df72a93c-eb25-4b5a-bb6c-0b989ce0b993" (UID: "df72a93c-eb25-4b5a-bb6c-0b989ce0b993"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.250981 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.277365 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.277392 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.277402 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df72a93c-eb25-4b5a-bb6c-0b989ce0b993-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.279175 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-689b884cd-xd7w8" podStartSLOduration=4.279154037 podStartE2EDuration="4.279154037s" podCreationTimestamp="2026-01-26 18:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:29.273504965 +0000 UTC m=+1502.581699673" watchObservedRunningTime="2026-01-26 18:55:29.279154037 +0000 UTC m=+1502.587348745" Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.427525 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.475422 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.631874 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:29 crc kubenswrapper[4737]: I0126 18:55:29.643717 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-tmbk8"] Jan 26 18:55:30 crc kubenswrapper[4737]: W0126 18:55:30.035679 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49c93b4b_1101_4e35_857b_722849fadd92.slice/crio-ab0d5aa4826b719bba3ca1d12af5ac66e313a74c2a63eba2ee2bf0cb199f91ee WatchSource:0}: Error finding container ab0d5aa4826b719bba3ca1d12af5ac66e313a74c2a63eba2ee2bf0cb199f91ee: Status 404 returned error can't find the container with id ab0d5aa4826b719bba3ca1d12af5ac66e313a74c2a63eba2ee2bf0cb199f91ee Jan 26 18:55:30 crc kubenswrapper[4737]: I0126 18:55:30.249583 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerStarted","Data":"eb7684d9a841f565e5357a1df2da7c7753f9620fa191cc3ff74d931e4cb881a7"} Jan 26 18:55:30 crc kubenswrapper[4737]: I0126 18:55:30.249656 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerStarted","Data":"0773aeaf6e888366f35a8a9a6b297c774861e220ed355ba46447e5071da7b6ff"} Jan 26 18:55:30 crc kubenswrapper[4737]: I0126 18:55:30.251168 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerStarted","Data":"ab0d5aa4826b719bba3ca1d12af5ac66e313a74c2a63eba2ee2bf0cb199f91ee"} Jan 26 18:55:30 crc kubenswrapper[4737]: I0126 18:55:30.254179 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7g98" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="registry-server" 
containerID="cri-o://a60e39cc36a0baceb992e4211988031bbb9fa64910f8224709438291f858fbe4" gracePeriod=2 Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.005598 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df72a93c-eb25-4b5a-bb6c-0b989ce0b993" path="/var/lib/kubelet/pods/df72a93c-eb25-4b5a-bb6c-0b989ce0b993/volumes" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.284821 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866f479b9-7wv96" event={"ID":"b84a5366-14c9-4b93-b185-18a4e3695ed7","Type":"ContainerStarted","Data":"e821dae0f481e1275cf16dacd81c1bae040333a7c8bff8ded4e5bb24a2544d6a"} Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.295270 4737 generic.go:334] "Generic (PLEG): container finished" podID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerID="eb7684d9a841f565e5357a1df2da7c7753f9620fa191cc3ff74d931e4cb881a7" exitCode=0 Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.295779 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerDied","Data":"eb7684d9a841f565e5357a1df2da7c7753f9620fa191cc3ff74d931e4cb881a7"} Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.321304 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" event={"ID":"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b","Type":"ContainerStarted","Data":"9ca7a8dc578c99c54561245bcd2eebc72d8e353902453594c104412c783cdbb1"} Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.327961 4737 generic.go:334] "Generic (PLEG): container finished" podID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerID="a60e39cc36a0baceb992e4211988031bbb9fa64910f8224709438291f858fbe4" exitCode=0 Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.328093 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" 
event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerDied","Data":"a60e39cc36a0baceb992e4211988031bbb9fa64910f8224709438291f858fbe4"} Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.336227 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerStarted","Data":"1a691060e34750f3c08f1f945d405b6593c3d94e35e875c9f0e7dab8150f33c3"} Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.352710 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.518041 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.664510 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content\") pod \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.664681 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjv9\" (UniqueName: \"kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9\") pod \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.664795 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities\") pod \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\" (UID: \"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39\") " Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.666335 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities" (OuterVolumeSpecName: "utilities") pod "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" (UID: "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.687544 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9" (OuterVolumeSpecName: "kube-api-access-kbjv9") pod "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" (UID: "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39"). InnerVolumeSpecName "kube-api-access-kbjv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.768460 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjv9\" (UniqueName: \"kubernetes.io/projected/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-kube-api-access-kbjv9\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.768506 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.807039 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" (UID: "0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:31 crc kubenswrapper[4737]: I0126 18:55:31.884021 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.365940 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerStarted","Data":"e544ad69ca751ec62e21a7ac226c2fe50389582109a0c738cf1fcae76616aeb9"} Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.367545 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.371292 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" event={"ID":"b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b","Type":"ContainerStarted","Data":"86d03ee4ff5c1cf2c484cb6d5c0562103e765eeea7ba9b91831dbb6431cb5233"} Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.373805 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7g98" event={"ID":"0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39","Type":"ContainerDied","Data":"3736f28ba7e345ebd8664dbb776cb6b78ebce675f8db265576579c2fffc6e954"} Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.373852 4737 scope.go:117] "RemoveContainer" containerID="a60e39cc36a0baceb992e4211988031bbb9fa64910f8224709438291f858fbe4" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.374001 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7g98" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.379686 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerStarted","Data":"17d21f64c9d2d1e2429d61a41c47c614ed746fecd46cf87a0749818145c44ab0"} Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.379842 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api-log" containerID="cri-o://17d21f64c9d2d1e2429d61a41c47c614ed746fecd46cf87a0749818145c44ab0" gracePeriod=30 Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.379962 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" containerID="cri-o://e2be2cc101276cae9cd96c6322ea82bb13c83bfa92517786990c72a87502e36a" gracePeriod=30 Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.380005 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.388546 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-866f479b9-7wv96" event={"ID":"b84a5366-14c9-4b93-b185-18a4e3695ed7","Type":"ContainerStarted","Data":"2aa05a0fd995f58bf7340a68019801e4cc23ff4d7c05c5006b59ad28e0ef609b"} Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.410133 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" podStartSLOduration=5.410111143 podStartE2EDuration="5.410111143s" podCreationTimestamp="2026-01-26 18:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:32.388702474 
+0000 UTC m=+1505.696897192" watchObservedRunningTime="2026-01-26 18:55:32.410111143 +0000 UTC m=+1505.718305841" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.429774 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c5b6c8cdb-gwc7x" podStartSLOduration=3.7630218380000002 podStartE2EDuration="7.429746251s" podCreationTimestamp="2026-01-26 18:55:25 +0000 UTC" firstStartedPulling="2026-01-26 18:55:26.878824331 +0000 UTC m=+1500.187019039" lastFinishedPulling="2026-01-26 18:55:30.545548744 +0000 UTC m=+1503.853743452" observedRunningTime="2026-01-26 18:55:32.404343989 +0000 UTC m=+1505.712538717" watchObservedRunningTime="2026-01-26 18:55:32.429746251 +0000 UTC m=+1505.737940959" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.456445 4737 scope.go:117] "RemoveContainer" containerID="ae744b77e619f224cdeb3592df13c75c2351589ab1635f3d1c5d15d4ae931b7a" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.469186 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.461479112 podStartE2EDuration="5.461479112s" podCreationTimestamp="2026-01-26 18:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:32.438618498 +0000 UTC m=+1505.746813206" watchObservedRunningTime="2026-01-26 18:55:32.461479112 +0000 UTC m=+1505.769673820" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.494020 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-866f479b9-7wv96" podStartSLOduration=3.630789623 podStartE2EDuration="7.49399191s" podCreationTimestamp="2026-01-26 18:55:25 +0000 UTC" firstStartedPulling="2026-01-26 18:55:26.706323387 +0000 UTC m=+1500.014518095" lastFinishedPulling="2026-01-26 18:55:30.569525674 +0000 UTC m=+1503.877720382" observedRunningTime="2026-01-26 
18:55:32.458570954 +0000 UTC m=+1505.766765682" watchObservedRunningTime="2026-01-26 18:55:32.49399191 +0000 UTC m=+1505.802186628" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.517987 4737 scope.go:117] "RemoveContainer" containerID="1e33f30f584eabf982ca73432af480580edd8dd363deaa40485847805e6f2920" Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.534142 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:55:32 crc kubenswrapper[4737]: I0126 18:55:32.556819 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7g98"] Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.011725 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" path="/var/lib/kubelet/pods/0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39/volumes" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.273181 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-687b47d654-rb2ft"] Jan 26 18:55:33 crc kubenswrapper[4737]: E0126 18:55:33.273726 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="extract-utilities" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.273743 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="extract-utilities" Jan 26 18:55:33 crc kubenswrapper[4737]: E0126 18:55:33.273762 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="extract-content" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.273768 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="extract-content" Jan 26 18:55:33 crc kubenswrapper[4737]: E0126 18:55:33.273786 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="registry-server" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.273792 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="registry-server" Jan 26 18:55:33 crc kubenswrapper[4737]: E0126 18:55:33.273829 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df72a93c-eb25-4b5a-bb6c-0b989ce0b993" containerName="init" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.273835 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="df72a93c-eb25-4b5a-bb6c-0b989ce0b993" containerName="init" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.274029 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1ea1d4-ca8f-48d7-838b-a71cc03f2b39" containerName="registry-server" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.274038 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="df72a93c-eb25-4b5a-bb6c-0b989ce0b993" containerName="init" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.276614 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.279026 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.279085 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.302226 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-687b47d654-rb2ft"] Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.319874 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-public-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.319970 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-combined-ca-bundle\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.320013 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data-custom\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.320047 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-internal-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.320088 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zwl\" (UniqueName: \"kubernetes.io/projected/1aef338e-174a-4bc2-acd1-56374a72e519-kube-api-access-29zwl\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.320123 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aef338e-174a-4bc2-acd1-56374a72e519-logs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.320210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.422587 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-public-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.422840 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-combined-ca-bundle\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.423041 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data-custom\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.423759 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-internal-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.423808 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29zwl\" (UniqueName: \"kubernetes.io/projected/1aef338e-174a-4bc2-acd1-56374a72e519-kube-api-access-29zwl\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.423861 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aef338e-174a-4bc2-acd1-56374a72e519-logs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.423898 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.426617 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aef338e-174a-4bc2-acd1-56374a72e519-logs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.429813 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data-custom\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.433187 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-internal-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.434724 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-public-tls-certs\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.444179 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-combined-ca-bundle\") pod 
\"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.445458 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zwl\" (UniqueName: \"kubernetes.io/projected/1aef338e-174a-4bc2-acd1-56374a72e519-kube-api-access-29zwl\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.452112 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aef338e-174a-4bc2-acd1-56374a72e519-config-data\") pod \"barbican-api-687b47d654-rb2ft\" (UID: \"1aef338e-174a-4bc2-acd1-56374a72e519\") " pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.478033 4737 generic.go:334] "Generic (PLEG): container finished" podID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerID="17d21f64c9d2d1e2429d61a41c47c614ed746fecd46cf87a0749818145c44ab0" exitCode=143 Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.478153 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerDied","Data":"17d21f64c9d2d1e2429d61a41c47c614ed746fecd46cf87a0749818145c44ab0"} Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.478193 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerStarted","Data":"e2be2cc101276cae9cd96c6322ea82bb13c83bfa92517786990c72a87502e36a"} Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.481469 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerStarted","Data":"6a3f8415df02f19bf44d8ff570aa29b991fe00f296a52eab364e8788cee6482e"} Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.526260 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.614742 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.735746 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"] Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.737129 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bf5799cfc-4n4l5" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-api" containerID="cri-o://1d6d7c8edb5d6302c4e4d245e968f01ad07431961b94a53685a1942d2ea642f2" gracePeriod=30 Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.737445 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bf5799cfc-4n4l5" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" containerID="cri-o://7645cee4e5194b787f2e662f685e95d5a2e16b3b5b6472e3876f7af25c7dbd3b" gracePeriod=30 Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.753141 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6bf5799cfc-4n4l5" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.200:9696/\": EOF" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.781167 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55cbc4d4bf-89lfk"] Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.783528 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.797162 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55cbc4d4bf-89lfk"] Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.834426 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-combined-ca-bundle\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.834542 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-httpd-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.834627 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937502 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzlt\" (UniqueName: \"kubernetes.io/projected/a9b9b411-9b28-486b-bb42-cf668fba2ee5-kube-api-access-djzlt\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937561 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-ovndb-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937588 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-combined-ca-bundle\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937646 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-httpd-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937692 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-public-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937726 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.937785 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-internal-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.945454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.948549 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-httpd-config\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:33 crc kubenswrapper[4737]: I0126 18:55:33.949712 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-combined-ca-bundle\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.040611 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-internal-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.040762 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzlt\" (UniqueName: \"kubernetes.io/projected/a9b9b411-9b28-486b-bb42-cf668fba2ee5-kube-api-access-djzlt\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: 
\"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.040792 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-ovndb-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.040907 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-public-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.047762 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-internal-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.050748 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-public-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.056030 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b9b411-9b28-486b-bb42-cf668fba2ee5-ovndb-tls-certs\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc 
kubenswrapper[4737]: I0126 18:55:34.063458 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzlt\" (UniqueName: \"kubernetes.io/projected/a9b9b411-9b28-486b-bb42-cf668fba2ee5-kube-api-access-djzlt\") pod \"neutron-55cbc4d4bf-89lfk\" (UID: \"a9b9b411-9b28-486b-bb42-cf668fba2ee5\") " pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.120536 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.504670 4737 generic.go:334] "Generic (PLEG): container finished" podID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerID="7645cee4e5194b787f2e662f685e95d5a2e16b3b5b6472e3876f7af25c7dbd3b" exitCode=0 Jan 26 18:55:34 crc kubenswrapper[4737]: I0126 18:55:34.504708 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerDied","Data":"7645cee4e5194b787f2e662f685e95d5a2e16b3b5b6472e3876f7af25c7dbd3b"} Jan 26 18:55:35 crc kubenswrapper[4737]: I0126 18:55:35.946735 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6bf5799cfc-4n4l5" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.200:9696/\": dial tcp 10.217.0.200:9696: connect: connection refused" Jan 26 18:55:37 crc kubenswrapper[4737]: I0126 18:55:37.875343 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:37 crc kubenswrapper[4737]: I0126 18:55:37.917630 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:55:37 crc kubenswrapper[4737]: I0126 18:55:37.981908 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] 
Jan 26 18:55:37 crc kubenswrapper[4737]: I0126 18:55:37.982179 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="dnsmasq-dns" containerID="cri-o://da0f1569d3723ad22594cbf6ff877b7d44a60da9fcf34692a0e6c5e652f990ec" gracePeriod=10 Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.031324 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.341799 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused" Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.553556 4737 generic.go:334] "Generic (PLEG): container finished" podID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerID="1d6d7c8edb5d6302c4e4d245e968f01ad07431961b94a53685a1942d2ea642f2" exitCode=0 Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.553902 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerDied","Data":"1d6d7c8edb5d6302c4e4d245e968f01ad07431961b94a53685a1942d2ea642f2"} Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.556482 4737 generic.go:334] "Generic (PLEG): container finished" podID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerID="da0f1569d3723ad22594cbf6ff877b7d44a60da9fcf34692a0e6c5e652f990ec" exitCode=0 Jan 26 18:55:38 crc kubenswrapper[4737]: I0126 18:55:38.557810 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" event={"ID":"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df","Type":"ContainerDied","Data":"da0f1569d3723ad22594cbf6ff877b7d44a60da9fcf34692a0e6c5e652f990ec"} 
Jan 26 18:55:39 crc kubenswrapper[4737]: I0126 18:55:39.577141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerStarted","Data":"2bb763c4cef34113873232ce8bfd401ab584eb6489fadd717b101744a0b99b78"} Jan 26 18:55:39 crc kubenswrapper[4737]: I0126 18:55:39.603224 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.280089207 podStartE2EDuration="12.603200657s" podCreationTimestamp="2026-01-26 18:55:27 +0000 UTC" firstStartedPulling="2026-01-26 18:55:30.041465397 +0000 UTC m=+1503.349660105" lastFinishedPulling="2026-01-26 18:55:31.364576847 +0000 UTC m=+1504.672771555" observedRunningTime="2026-01-26 18:55:39.600974396 +0000 UTC m=+1512.909169104" watchObservedRunningTime="2026-01-26 18:55:39.603200657 +0000 UTC m=+1512.911395365" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.003017 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.109924 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.110366 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.110431 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.110533 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.110572 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.110718 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwz9\" 
(UniqueName: \"kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9\") pod \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\" (UID: \"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.139457 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9" (OuterVolumeSpecName: "kube-api-access-7bwz9") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "kube-api-access-7bwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.189574 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.207972 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.215329 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.215359 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwz9\" (UniqueName: \"kubernetes.io/projected/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-kube-api-access-7bwz9\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.215371 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.215379 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.236600 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.237646 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config" (OuterVolumeSpecName: "config") pod "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" (UID: "40dcad4e-d2aa-4e7e-bf72-4afd88ca77df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.318415 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.318444 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.318454 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.640400 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.650221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5ndsz" event={"ID":"40dcad4e-d2aa-4e7e-bf72-4afd88ca77df","Type":"ContainerDied","Data":"3f7e46dca0dfe0ebd90a530d279e4915e8fb985098df71536da56eb85a145e54"} Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.651128 4737 scope.go:117] "RemoveContainer" containerID="da0f1569d3723ad22594cbf6ff877b7d44a60da9fcf34692a0e6c5e652f990ec" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.673717 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.685711 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5ndsz"] Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.828638 4737 scope.go:117] "RemoveContainer" containerID="a082d91e479306d806f4f06e3ea3edc667a0cda0bb576e526b35b2693041391b" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.830292 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.958976 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959257 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959371 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959403 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2r2j\" (UniqueName: \"kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959440 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959628 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.959682 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config\") pod \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\" (UID: \"cfdad184-ce5c-4bfe-a9dc-44f62de75095\") " Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.964142 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.970703 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 18:55:40 crc kubenswrapper[4737]: I0126 18:55:40.975805 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j" (OuterVolumeSpecName: "kube-api-access-k2r2j") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "kube-api-access-k2r2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.055815 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" path="/var/lib/kubelet/pods/40dcad4e-d2aa-4e7e-bf72-4afd88ca77df/volumes" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.064644 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.064671 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2r2j\" (UniqueName: \"kubernetes.io/projected/cfdad184-ce5c-4bfe-a9dc-44f62de75095-kube-api-access-k2r2j\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.110381 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.124884 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.156704 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.157272 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config" (OuterVolumeSpecName: "config") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.167277 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.167316 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.167329 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.167343 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc 
kubenswrapper[4737]: I0126 18:55:41.201433 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cfdad184-ce5c-4bfe-a9dc-44f62de75095" (UID: "cfdad184-ce5c-4bfe-a9dc-44f62de75095"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.270277 4737 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfdad184-ce5c-4bfe-a9dc-44f62de75095-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:41 crc kubenswrapper[4737]: W0126 18:55:41.327281 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b9b411_9b28_486b_bb42_cf668fba2ee5.slice/crio-2b8dcb4da963c7742a18bf3c94eb64ddaac3efabaf720b6de29d0d994226ca60 WatchSource:0}: Error finding container 2b8dcb4da963c7742a18bf3c94eb64ddaac3efabaf720b6de29d0d994226ca60: Status 404 returned error can't find the container with id 2b8dcb4da963c7742a18bf3c94eb64ddaac3efabaf720b6de29d0d994226ca60 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.333191 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55cbc4d4bf-89lfk"] Jan 26 18:55:41 crc kubenswrapper[4737]: W0126 18:55:41.475114 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aef338e_174a_4bc2_acd1_56374a72e519.slice/crio-acf85c3868a61a07d39afa3ffe22a05f725cf8c8121eb6d0c7ff156f8e364198 WatchSource:0}: Error finding container acf85c3868a61a07d39afa3ffe22a05f725cf8c8121eb6d0c7ff156f8e364198: Status 404 returned error can't find the container with id acf85c3868a61a07d39afa3ffe22a05f725cf8c8121eb6d0c7ff156f8e364198 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.476009 4737 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-687b47d654-rb2ft"] Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.659753 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerStarted","Data":"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b"} Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.660935 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-central-agent" containerID="cri-o://9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919" gracePeriod=30 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.661026 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="sg-core" containerID="cri-o://6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757" gracePeriod=30 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.661082 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-notification-agent" containerID="cri-o://f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351" gracePeriod=30 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.661129 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="proxy-httpd" containerID="cri-o://e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b" gracePeriod=30 Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.663723 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5799cfc-4n4l5" 
event={"ID":"cfdad184-ce5c-4bfe-a9dc-44f62de75095","Type":"ContainerDied","Data":"8d7c413a9f90345c333eec90c45d55b623ca99617a0c83dad9863f3f96ec5f52"} Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.663783 4737 scope.go:117] "RemoveContainer" containerID="7645cee4e5194b787f2e662f685e95d5a2e16b3b5b6472e3876f7af25c7dbd3b" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.663909 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bf5799cfc-4n4l5" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.668512 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-687b47d654-rb2ft" event={"ID":"1aef338e-174a-4bc2-acd1-56374a72e519","Type":"ContainerStarted","Data":"acf85c3868a61a07d39afa3ffe22a05f725cf8c8121eb6d0c7ff156f8e364198"} Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.680185 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cbc4d4bf-89lfk" event={"ID":"a9b9b411-9b28-486b-bb42-cf668fba2ee5","Type":"ContainerStarted","Data":"d9d18abd05e3ef25c9c2a44065ed0385c5ddb1723e8f2a19e43185c3688ee117"} Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.680529 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cbc4d4bf-89lfk" event={"ID":"a9b9b411-9b28-486b-bb42-cf668fba2ee5","Type":"ContainerStarted","Data":"2b8dcb4da963c7742a18bf3c94eb64ddaac3efabaf720b6de29d0d994226ca60"} Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.695735 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.165801486 podStartE2EDuration="1m9.695714204s" podCreationTimestamp="2026-01-26 18:54:32 +0000 UTC" firstStartedPulling="2026-01-26 18:54:34.306965634 +0000 UTC m=+1447.615160342" lastFinishedPulling="2026-01-26 18:55:40.836878352 +0000 UTC m=+1514.145073060" observedRunningTime="2026-01-26 18:55:41.688521676 +0000 UTC m=+1514.996716404" 
watchObservedRunningTime="2026-01-26 18:55:41.695714204 +0000 UTC m=+1515.003908912" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.714284 4737 scope.go:117] "RemoveContainer" containerID="1d6d7c8edb5d6302c4e4d245e968f01ad07431961b94a53685a1942d2ea642f2" Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.725857 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"] Jan 26 18:55:41 crc kubenswrapper[4737]: I0126 18:55:41.743762 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bf5799cfc-4n4l5"] Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.691489 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55cbc4d4bf-89lfk" event={"ID":"a9b9b411-9b28-486b-bb42-cf668fba2ee5","Type":"ContainerStarted","Data":"aa019c256a40d42d58559a45bec37182e6ccdfbb637deb414a5b95f3cea26d41"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.693222 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.695915 4737 generic.go:334] "Generic (PLEG): container finished" podID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerID="e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b" exitCode=0 Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.695944 4737 generic.go:334] "Generic (PLEG): container finished" podID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerID="6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757" exitCode=2 Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.695956 4737 generic.go:334] "Generic (PLEG): container finished" podID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerID="9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919" exitCode=0 Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.696001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerDied","Data":"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.696024 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerDied","Data":"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.696038 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerDied","Data":"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.699901 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-687b47d654-rb2ft" event={"ID":"1aef338e-174a-4bc2-acd1-56374a72e519","Type":"ContainerStarted","Data":"31926c5a06f49fab0e29d0f19c619874369c77a440ec2e3881f8c21eb0399b18"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.699944 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-687b47d654-rb2ft" event={"ID":"1aef338e-174a-4bc2-acd1-56374a72e519","Type":"ContainerStarted","Data":"44cd38527fda17e9e772dc16b7df5a16b331cfa3001b4ec6afabf571251de6bc"} Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.700131 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.700167 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.715506 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55cbc4d4bf-89lfk" podStartSLOduration=9.715486439 podStartE2EDuration="9.715486439s" 
podCreationTimestamp="2026-01-26 18:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:42.711516076 +0000 UTC m=+1516.019710784" watchObservedRunningTime="2026-01-26 18:55:42.715486439 +0000 UTC m=+1516.023681147" Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.740387 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-687b47d654-rb2ft" podStartSLOduration=9.740359879 podStartE2EDuration="9.740359879s" podCreationTimestamp="2026-01-26 18:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:42.733859967 +0000 UTC m=+1516.042054695" watchObservedRunningTime="2026-01-26 18:55:42.740359879 +0000 UTC m=+1516.048554587" Jan 26 18:55:42 crc kubenswrapper[4737]: I0126 18:55:42.793696 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.006208 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" path="/var/lib/kubelet/pods/cfdad184-ce5c-4bfe-a9dc-44f62de75095/volumes" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.026565 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.443604 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523285 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523410 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523504 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523682 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56t26\" (UniqueName: \"kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523728 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.523782 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.525610 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.539198 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26" (OuterVolumeSpecName: "kube-api-access-56t26") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "kube-api-access-56t26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.540789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts" (OuterVolumeSpecName: "scripts") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.558625 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.626712 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.627053 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle\") pod \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\" (UID: \"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6\") " Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.627740 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56t26\" (UniqueName: \"kubernetes.io/projected/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-kube-api-access-56t26\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.627766 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.627779 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.628553 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.648354 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data" (OuterVolumeSpecName: "config-data") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.714464 4737 generic.go:334] "Generic (PLEG): container finished" podID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerID="f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351" exitCode=0 Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.715021 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.715006 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerDied","Data":"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351"} Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.715221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6","Type":"ContainerDied","Data":"eac0d59303e3deb71a51dc974899adfac9802ad015d66af0fa9a58e23a1d6a77"} Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.715245 4737 scope.go:117] "RemoveContainer" containerID="e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.717850 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" (UID: 
"b81603b3-3bc1-43ba-8a07-59b7f8eed3b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.729561 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.729600 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.729613 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.761850 4737 scope.go:117] "RemoveContainer" containerID="6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.775883 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.782887 4737 scope.go:117] "RemoveContainer" containerID="f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.805063 4737 scope.go:117] "RemoveContainer" containerID="9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.838204 4737 scope.go:117] "RemoveContainer" containerID="e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b" Jan 26 18:55:43 crc kubenswrapper[4737]: E0126 18:55:43.839421 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b\": container with ID starting with e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b not found: ID does not exist" containerID="e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.839490 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b"} err="failed to get container status \"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b\": rpc error: code = NotFound desc = could not find container \"e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b\": container with ID starting with e89874e81ddd99922af675faaeb910a050724d1ecdd78ea1e20b9c1fc564c39b not found: ID does not exist" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.839522 4737 scope.go:117] "RemoveContainer" containerID="6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757" Jan 26 18:55:43 crc kubenswrapper[4737]: E0126 18:55:43.839935 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757\": container with ID starting with 6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757 not found: ID does not exist" containerID="6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.839962 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757"} err="failed to get container status \"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757\": rpc error: code = NotFound desc = could not find container \"6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757\": container with ID 
starting with 6dde630e032b3aa344af4cb2f5546393a37e2efecbf8f3c884b7aee136151757 not found: ID does not exist" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.839998 4737 scope.go:117] "RemoveContainer" containerID="f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351" Jan 26 18:55:43 crc kubenswrapper[4737]: E0126 18:55:43.840556 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351\": container with ID starting with f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351 not found: ID does not exist" containerID="f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.840578 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351"} err="failed to get container status \"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351\": rpc error: code = NotFound desc = could not find container \"f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351\": container with ID starting with f65298abc446dd56f82ba1384fb99393ede8cb1fe3e2d3e8e570280c6590b351 not found: ID does not exist" Jan 26 18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.840615 4737 scope.go:117] "RemoveContainer" containerID="9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919" Jan 26 18:55:43 crc kubenswrapper[4737]: E0126 18:55:43.842125 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919\": container with ID starting with 9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919 not found: ID does not exist" containerID="9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919" Jan 26 
18:55:43 crc kubenswrapper[4737]: I0126 18:55:43.842150 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919"} err="failed to get container status \"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919\": rpc error: code = NotFound desc = could not find container \"9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919\": container with ID starting with 9fbc364aab6f48e48186ac9cb290f05e9d2751c38282736765a4effef6f43919 not found: ID does not exist" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.102548 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.115693 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.133534 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134176 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-notification-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134189 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-notification-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134209 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="proxy-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134215 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="proxy-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134223 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="init" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134230 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="init" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134245 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="sg-core" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134251 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="sg-core" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134262 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="dnsmasq-dns" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134267 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="dnsmasq-dns" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134281 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134287 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134298 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-api" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134304 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-api" Jan 26 18:55:44 crc kubenswrapper[4737]: E0126 18:55:44.134315 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" 
containerName="ceilometer-central-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134320 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-central-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134507 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-api" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134527 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="40dcad4e-d2aa-4e7e-bf72-4afd88ca77df" containerName="dnsmasq-dns" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134535 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-central-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134547 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="proxy-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134559 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="sg-core" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134569 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdad184-ce5c-4bfe-a9dc-44f62de75095" containerName="neutron-httpd" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.134578 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" containerName="ceilometer-notification-agent" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.136488 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.136577 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.157825 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.157908 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.158122 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.158374 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.158515 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.158567 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncsd\" (UniqueName: \"kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.158648 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.163042 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.163062 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261126 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261182 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261241 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd\") pod \"ceilometer-0\" (UID: 
\"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261275 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261339 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261379 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.261402 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncsd\" (UniqueName: \"kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.262021 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.262141 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.266777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.266807 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.268630 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.279454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.281556 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncsd\" (UniqueName: \"kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd\") pod \"ceilometer-0\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.483358 4737 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.731678 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="cinder-scheduler" containerID="cri-o://6a3f8415df02f19bf44d8ff570aa29b991fe00f296a52eab364e8788cee6482e" gracePeriod=30 Jan 26 18:55:44 crc kubenswrapper[4737]: I0126 18:55:44.732372 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="probe" containerID="cri-o://2bb763c4cef34113873232ce8bfd401ab584eb6489fadd717b101744a0b99b78" gracePeriod=30 Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.001591 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81603b3-3bc1-43ba-8a07-59b7f8eed3b6" path="/var/lib/kubelet/pods/b81603b3-3bc1-43ba-8a07-59b7f8eed3b6/volumes" Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.002784 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.743609 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerStarted","Data":"7e562ddf445da0968e6968ec36b524eefb8335cd84d34770959ff0bdaddf959e"} Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.744015 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerStarted","Data":"d6c33494fa081f904b65b97585a82b0340bfa833ed6c04301baf35e71db0c587"} Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.747539 4737 generic.go:334] "Generic (PLEG): container finished" podID="49c93b4b-1101-4e35-857b-722849fadd92" 
containerID="2bb763c4cef34113873232ce8bfd401ab584eb6489fadd717b101744a0b99b78" exitCode=0 Jan 26 18:55:45 crc kubenswrapper[4737]: I0126 18:55:45.747576 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerDied","Data":"2bb763c4cef34113873232ce8bfd401ab584eb6489fadd717b101744a0b99b78"} Jan 26 18:55:47 crc kubenswrapper[4737]: I0126 18:55:47.306633 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerStarted","Data":"7c6b320d56f98865258b0d472d82a5d9ee5605b4552d892201d84591eb450942"} Jan 26 18:55:48 crc kubenswrapper[4737]: I0126 18:55:48.324288 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerStarted","Data":"3ba4ea608b2a81a054094b03a442ad33b3817e85249a779afcab5b0ef5056092"} Jan 26 18:55:49 crc kubenswrapper[4737]: I0126 18:55:49.698717 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.088934 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c974878b4-m6rmv" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.402972 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerStarted","Data":"ec29fec033478998c456ee72e5cedecbcd414e1690994e988310b88c816f604e"} Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.403371 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.410691 4737 generic.go:334] "Generic (PLEG): container finished" podID="49c93b4b-1101-4e35-857b-722849fadd92" 
containerID="6a3f8415df02f19bf44d8ff570aa29b991fe00f296a52eab364e8788cee6482e" exitCode=0 Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.411605 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerDied","Data":"6a3f8415df02f19bf44d8ff570aa29b991fe00f296a52eab364e8788cee6482e"} Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.456025 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.564970883 podStartE2EDuration="6.456005352s" podCreationTimestamp="2026-01-26 18:55:44 +0000 UTC" firstStartedPulling="2026-01-26 18:55:44.996004631 +0000 UTC m=+1518.304199339" lastFinishedPulling="2026-01-26 18:55:49.88703911 +0000 UTC m=+1523.195233808" observedRunningTime="2026-01-26 18:55:50.451760753 +0000 UTC m=+1523.759955461" watchObservedRunningTime="2026-01-26 18:55:50.456005352 +0000 UTC m=+1523.764200060" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.700009 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.732924 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-687b47d654-rb2ft" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.780391 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.795058 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.795558 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-689b884cd-xd7w8" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api-log" containerID="cri-o://3e3d61f1f8efce9665ab8c6ea8d0897e1affcdd5f7d0a7c74ad7558a5cdb1277" gracePeriod=30 Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.795612 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-689b884cd-xd7w8" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api" containerID="cri-o://14468dfc9b5395ce444e1d5e2d3fc9905c9a7e4ac33b331a53e7cf5718691c7a" gracePeriod=30 Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828117 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828183 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828260 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 
18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828341 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828403 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkqmz\" (UniqueName: \"kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.828436 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle\") pod \"49c93b4b-1101-4e35-857b-722849fadd92\" (UID: \"49c93b4b-1101-4e35-857b-722849fadd92\") " Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.833639 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.840537 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz" (OuterVolumeSpecName: "kube-api-access-dkqmz") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "kube-api-access-dkqmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.852376 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.859969 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts" (OuterVolumeSpecName: "scripts") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.933314 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.936038 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.936151 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.936161 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.936170 4737 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/49c93b4b-1101-4e35-857b-722849fadd92-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:50 crc kubenswrapper[4737]: I0126 18:55:50.936181 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkqmz\" (UniqueName: \"kubernetes.io/projected/49c93b4b-1101-4e35-857b-722849fadd92-kube-api-access-dkqmz\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.096246 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data" (OuterVolumeSpecName: "config-data") pod "49c93b4b-1101-4e35-857b-722849fadd92" (UID: "49c93b4b-1101-4e35-857b-722849fadd92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.142304 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c93b4b-1101-4e35-857b-722849fadd92-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.425169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"49c93b4b-1101-4e35-857b-722849fadd92","Type":"ContainerDied","Data":"ab0d5aa4826b719bba3ca1d12af5ac66e313a74c2a63eba2ee2bf0cb199f91ee"} Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.425227 4737 scope.go:117] "RemoveContainer" containerID="2bb763c4cef34113873232ce8bfd401ab584eb6489fadd717b101744a0b99b78" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.425377 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.435159 4737 generic.go:334] "Generic (PLEG): container finished" podID="cec54497-f9a7-4d22-8989-a78d815df93c" containerID="3e3d61f1f8efce9665ab8c6ea8d0897e1affcdd5f7d0a7c74ad7558a5cdb1277" exitCode=143 Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.435213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerDied","Data":"3e3d61f1f8efce9665ab8c6ea8d0897e1affcdd5f7d0a7c74ad7558a5cdb1277"} Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.460598 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.468251 4737 scope.go:117] "RemoveContainer" containerID="6a3f8415df02f19bf44d8ff570aa29b991fe00f296a52eab364e8788cee6482e" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.472418 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.510708 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:51 crc kubenswrapper[4737]: E0126 18:55:51.511239 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="probe" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.511255 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="probe" Jan 26 18:55:51 crc kubenswrapper[4737]: E0126 18:55:51.511302 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="cinder-scheduler" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.511309 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="cinder-scheduler" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.511531 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="cinder-scheduler" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.511552 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c93b4b-1101-4e35-857b-722849fadd92" containerName="probe" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.512834 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.518452 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.543447 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.559886 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9hn\" (UniqueName: \"kubernetes.io/projected/635e921c-e7e7-4721-a152-f589e21e4631-kube-api-access-hb9hn\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.559963 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-scripts\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.559992 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.560037 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 
18:55:51.560281 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/635e921c-e7e7-4721-a152-f589e21e4631-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.560303 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.662764 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9hn\" (UniqueName: \"kubernetes.io/projected/635e921c-e7e7-4721-a152-f589e21e4631-kube-api-access-hb9hn\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663267 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-scripts\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663320 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663406 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663509 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/635e921c-e7e7-4721-a152-f589e21e4631-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663545 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.663659 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/635e921c-e7e7-4721-a152-f589e21e4631-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.665936 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-86b84744f8-59mdj" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.668163 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.668196 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-scripts\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.668752 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-config-data\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.670177 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635e921c-e7e7-4721-a152-f589e21e4631-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.683591 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9hn\" (UniqueName: \"kubernetes.io/projected/635e921c-e7e7-4721-a152-f589e21e4631-kube-api-access-hb9hn\") pod \"cinder-scheduler-0\" (UID: \"635e921c-e7e7-4721-a152-f589e21e4631\") " pod="openstack/cinder-scheduler-0" Jan 26 18:55:51 crc kubenswrapper[4737]: I0126 18:55:51.863717 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 18:55:52 crc kubenswrapper[4737]: I0126 18:55:52.555969 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 18:55:52 crc kubenswrapper[4737]: W0126 18:55:52.559560 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod635e921c_e7e7_4721_a152_f589e21e4631.slice/crio-95aa6e2021992e28fd3c12f254bf7ef42786e7a9f182c647bb954e21a372591f WatchSource:0}: Error finding container 95aa6e2021992e28fd3c12f254bf7ef42786e7a9f182c647bb954e21a372591f: Status 404 returned error can't find the container with id 95aa6e2021992e28fd3c12f254bf7ef42786e7a9f182c647bb954e21a372591f Jan 26 18:55:53 crc kubenswrapper[4737]: I0126 18:55:53.001856 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c93b4b-1101-4e35-857b-722849fadd92" path="/var/lib/kubelet/pods/49c93b4b-1101-4e35-857b-722849fadd92/volumes" Jan 26 18:55:53 crc kubenswrapper[4737]: I0126 18:55:53.476224 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"635e921c-e7e7-4721-a152-f589e21e4631","Type":"ContainerStarted","Data":"a05fedf76e133a5f4d24dd0c2f1fa46bd4a0b53631088521e7a3e246f8c2b7c2"} Jan 26 18:55:53 crc kubenswrapper[4737]: I0126 18:55:53.476589 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"635e921c-e7e7-4721-a152-f589e21e4631","Type":"ContainerStarted","Data":"95aa6e2021992e28fd3c12f254bf7ef42786e7a9f182c647bb954e21a372591f"} Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.289557 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-689b884cd-xd7w8" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:37502->10.217.0.206:9311: read: connection 
reset by peer" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.289615 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-689b884cd-xd7w8" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.206:9311/healthcheck\": read tcp 10.217.0.2:37490->10.217.0.206:9311: read: connection reset by peer" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.495141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"635e921c-e7e7-4721-a152-f589e21e4631","Type":"ContainerStarted","Data":"22c150008be1a1e139fb9e8a0d3b8fb9c23eb83dee76c3aab88bdd19845cb447"} Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.525150 4737 generic.go:334] "Generic (PLEG): container finished" podID="cec54497-f9a7-4d22-8989-a78d815df93c" containerID="14468dfc9b5395ce444e1d5e2d3fc9905c9a7e4ac33b331a53e7cf5718691c7a" exitCode=0 Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.525210 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerDied","Data":"14468dfc9b5395ce444e1d5e2d3fc9905c9a7e4ac33b331a53e7cf5718691c7a"} Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.525399 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.525383407 podStartE2EDuration="3.525383407s" podCreationTimestamp="2026-01-26 18:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:54.524854835 +0000 UTC m=+1527.833049533" watchObservedRunningTime="2026-01-26 18:55:54.525383407 +0000 UTC m=+1527.833578115" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.912253 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.971711 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs\") pod \"cec54497-f9a7-4d22-8989-a78d815df93c\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.972144 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle\") pod \"cec54497-f9a7-4d22-8989-a78d815df93c\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.972165 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom\") pod \"cec54497-f9a7-4d22-8989-a78d815df93c\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.972216 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxqjv\" (UniqueName: \"kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv\") pod \"cec54497-f9a7-4d22-8989-a78d815df93c\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.972253 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data\") pod \"cec54497-f9a7-4d22-8989-a78d815df93c\" (UID: \"cec54497-f9a7-4d22-8989-a78d815df93c\") " Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.978564 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs" (OuterVolumeSpecName: "logs") pod "cec54497-f9a7-4d22-8989-a78d815df93c" (UID: "cec54497-f9a7-4d22-8989-a78d815df93c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.985628 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv" (OuterVolumeSpecName: "kube-api-access-bxqjv") pod "cec54497-f9a7-4d22-8989-a78d815df93c" (UID: "cec54497-f9a7-4d22-8989-a78d815df93c"). InnerVolumeSpecName "kube-api-access-bxqjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:54 crc kubenswrapper[4737]: I0126 18:55:54.997890 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cec54497-f9a7-4d22-8989-a78d815df93c" (UID: "cec54497-f9a7-4d22-8989-a78d815df93c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.017235 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cec54497-f9a7-4d22-8989-a78d815df93c" (UID: "cec54497-f9a7-4d22-8989-a78d815df93c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.074494 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data" (OuterVolumeSpecName: "config-data") pod "cec54497-f9a7-4d22-8989-a78d815df93c" (UID: "cec54497-f9a7-4d22-8989-a78d815df93c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.077196 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec54497-f9a7-4d22-8989-a78d815df93c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.077229 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.077239 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.077267 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxqjv\" (UniqueName: \"kubernetes.io/projected/cec54497-f9a7-4d22-8989-a78d815df93c-kube-api-access-bxqjv\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.077276 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec54497-f9a7-4d22-8989-a78d815df93c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.541294 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-689b884cd-xd7w8" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.541269 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689b884cd-xd7w8" event={"ID":"cec54497-f9a7-4d22-8989-a78d815df93c","Type":"ContainerDied","Data":"d49ab26b7c5c9fbd156c998dd8e8ce5dc3666e7cc3dae8ef13954ecd74185778"} Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.542803 4737 scope.go:117] "RemoveContainer" containerID="14468dfc9b5395ce444e1d5e2d3fc9905c9a7e4ac33b331a53e7cf5718691c7a" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.575698 4737 scope.go:117] "RemoveContainer" containerID="3e3d61f1f8efce9665ab8c6ea8d0897e1affcdd5f7d0a7c74ad7558a5cdb1277" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.606011 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.626237 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-689b884cd-xd7w8"] Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.712554 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 18:55:55 crc kubenswrapper[4737]: E0126 18:55:55.713132 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.713157 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api" Jan 26 18:55:55 crc kubenswrapper[4737]: E0126 18:55:55.713201 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api-log" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.713211 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" 
containerName="barbican-api-log" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.713484 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api-log" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.713516 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" containerName="barbican-api" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.733754 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.739645 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.741907 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.745259 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lqhhb" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.783851 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.806520 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.807001 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.807363 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzk6s\" (UniqueName: \"kubernetes.io/projected/d857f780-d620-4d1a-bacb-8ecff74a012f-kube-api-access-jzk6s\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.807479 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.911134 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.911195 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzk6s\" (UniqueName: \"kubernetes.io/projected/d857f780-d620-4d1a-bacb-8ecff74a012f-kube-api-access-jzk6s\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.911318 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc 
kubenswrapper[4737]: I0126 18:55:55.911434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config-secret\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.912465 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.917509 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-openstack-config-secret\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.934437 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857f780-d620-4d1a-bacb-8ecff74a012f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:55 crc kubenswrapper[4737]: I0126 18:55:55.939057 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzk6s\" (UniqueName: \"kubernetes.io/projected/d857f780-d620-4d1a-bacb-8ecff74a012f-kube-api-access-jzk6s\") pod \"openstackclient\" (UID: \"d857f780-d620-4d1a-bacb-8ecff74a012f\") " pod="openstack/openstackclient" Jan 26 18:55:56 crc kubenswrapper[4737]: I0126 18:55:56.060734 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 18:55:56 crc kubenswrapper[4737]: W0126 18:55:56.536736 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd857f780_d620_4d1a_bacb_8ecff74a012f.slice/crio-e8dd0f459935cd4e589cfbe27e91ab184907cc4122c70af5c7f190eabfbf3574 WatchSource:0}: Error finding container e8dd0f459935cd4e589cfbe27e91ab184907cc4122c70af5c7f190eabfbf3574: Status 404 returned error can't find the container with id e8dd0f459935cd4e589cfbe27e91ab184907cc4122c70af5c7f190eabfbf3574 Jan 26 18:55:56 crc kubenswrapper[4737]: I0126 18:55:56.552839 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 18:55:56 crc kubenswrapper[4737]: I0126 18:55:56.553551 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d857f780-d620-4d1a-bacb-8ecff74a012f","Type":"ContainerStarted","Data":"e8dd0f459935cd4e589cfbe27e91ab184907cc4122c70af5c7f190eabfbf3574"} Jan 26 18:55:56 crc kubenswrapper[4737]: I0126 18:55:56.864690 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 18:55:56 crc kubenswrapper[4737]: I0126 18:55:56.994937 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec54497-f9a7-4d22-8989-a78d815df93c" path="/var/lib/kubelet/pods/cec54497-f9a7-4d22-8989-a78d815df93c/volumes" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.044096 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6dd8ff9d59-rttts"] Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.046586 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.057672 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.057879 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.057988 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.066711 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dd8ff9d59-rttts"] Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.179956 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-etc-swift\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.180192 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-combined-ca-bundle\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.180422 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-internal-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc 
kubenswrapper[4737]: I0126 18:56:02.180444 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-public-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.180766 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-config-data\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.180917 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-log-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.180946 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jx5\" (UniqueName: \"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-kube-api-access-v4jx5\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.181082 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-run-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " 
pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.283605 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-log-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.283674 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jx5\" (UniqueName: \"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-kube-api-access-v4jx5\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.283783 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-run-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.283825 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-etc-swift\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.283900 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-combined-ca-bundle\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc 
kubenswrapper[4737]: I0126 18:56:02.284016 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-internal-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.284038 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-public-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.284158 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-config-data\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.284315 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-run-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.285307 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38df0a7c-47f1-4834-b970-d815d009b6d7-log-httpd\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.294231 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-combined-ca-bundle\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.294328 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-etc-swift\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.295659 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-internal-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.300685 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-config-data\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.300770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38df0a7c-47f1-4834-b970-d815d009b6d7-public-tls-certs\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.304808 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4jx5\" (UniqueName: 
\"kubernetes.io/projected/38df0a7c-47f1-4834-b970-d815d009b6d7-kube-api-access-v4jx5\") pod \"swift-proxy-6dd8ff9d59-rttts\" (UID: \"38df0a7c-47f1-4834-b970-d815d009b6d7\") " pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.400339 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.428063 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.661146 4737 generic.go:334] "Generic (PLEG): container finished" podID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerID="e2be2cc101276cae9cd96c6322ea82bb13c83bfa92517786990c72a87502e36a" exitCode=137 Jan 26 18:56:02 crc kubenswrapper[4737]: I0126 18:56:02.661235 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerDied","Data":"e2be2cc101276cae9cd96c6322ea82bb13c83bfa92517786990c72a87502e36a"} Jan 26 18:56:03 crc kubenswrapper[4737]: I0126 18:56:03.068872 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.209:8776/healthcheck\": dial tcp 10.217.0.209:8776: connect: connection refused" Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.136814 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55cbc4d4bf-89lfk" Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.243296 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.243615 4737 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-75ff77bb76-fx82z" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-api" containerID="cri-o://ab6b962c9faa096a1c52d6d51fa797c462cb80650a5f052fca9c9324622c4e4a" gracePeriod=30 Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.244264 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75ff77bb76-fx82z" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-httpd" containerID="cri-o://e71eaf70b7225ef5806219a093666ec7834a1bfd927b32cdcef79f1ad0f6a97d" gracePeriod=30 Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.692991 4737 generic.go:334] "Generic (PLEG): container finished" podID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerID="e71eaf70b7225ef5806219a093666ec7834a1bfd927b32cdcef79f1ad0f6a97d" exitCode=0 Jan 26 18:56:04 crc kubenswrapper[4737]: I0126 18:56:04.693126 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerDied","Data":"e71eaf70b7225ef5806219a093666ec7834a1bfd927b32cdcef79f1ad0f6a97d"} Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.474976 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.475366 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-central-agent" containerID="cri-o://7e562ddf445da0968e6968ec36b524eefb8335cd84d34770959ff0bdaddf959e" gracePeriod=30 Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.475706 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="proxy-httpd" containerID="cri-o://ec29fec033478998c456ee72e5cedecbcd414e1690994e988310b88c816f604e" gracePeriod=30 Jan 26 
18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.475809 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-notification-agent" containerID="cri-o://7c6b320d56f98865258b0d472d82a5d9ee5605b4552d892201d84591eb450942" gracePeriod=30 Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.475712 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="sg-core" containerID="cri-o://3ba4ea608b2a81a054094b03a442ad33b3817e85249a779afcab5b0ef5056092" gracePeriod=30 Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.506266 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.212:3000/\": EOF" Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.710158 4737 generic.go:334] "Generic (PLEG): container finished" podID="dce274e8-16df-4b86-8803-b681b0160bc3" containerID="3ba4ea608b2a81a054094b03a442ad33b3817e85249a779afcab5b0ef5056092" exitCode=2 Jan 26 18:56:05 crc kubenswrapper[4737]: I0126 18:56:05.710252 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerDied","Data":"3ba4ea608b2a81a054094b03a442ad33b3817e85249a779afcab5b0ef5056092"} Jan 26 18:56:06 crc kubenswrapper[4737]: I0126 18:56:06.733469 4737 generic.go:334] "Generic (PLEG): container finished" podID="dce274e8-16df-4b86-8803-b681b0160bc3" containerID="ec29fec033478998c456ee72e5cedecbcd414e1690994e988310b88c816f604e" exitCode=0 Jan 26 18:56:06 crc kubenswrapper[4737]: I0126 18:56:06.733813 4737 generic.go:334] "Generic (PLEG): container finished" podID="dce274e8-16df-4b86-8803-b681b0160bc3" 
containerID="7e562ddf445da0968e6968ec36b524eefb8335cd84d34770959ff0bdaddf959e" exitCode=0 Jan 26 18:56:06 crc kubenswrapper[4737]: I0126 18:56:06.733549 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerDied","Data":"ec29fec033478998c456ee72e5cedecbcd414e1690994e988310b88c816f604e"} Jan 26 18:56:06 crc kubenswrapper[4737]: I0126 18:56:06.733864 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerDied","Data":"7e562ddf445da0968e6968ec36b524eefb8335cd84d34770959ff0bdaddf959e"} Jan 26 18:56:08 crc kubenswrapper[4737]: I0126 18:56:08.067787 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.209:8776/healthcheck\": dial tcp 10.217.0.209:8776: connect: connection refused" Jan 26 18:56:09 crc kubenswrapper[4737]: I0126 18:56:09.768058 4737 generic.go:334] "Generic (PLEG): container finished" podID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerID="ab6b962c9faa096a1c52d6d51fa797c462cb80650a5f052fca9c9324622c4e4a" exitCode=0 Jan 26 18:56:09 crc kubenswrapper[4737]: I0126 18:56:09.768110 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerDied","Data":"ab6b962c9faa096a1c52d6d51fa797c462cb80650a5f052fca9c9324622c4e4a"} Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.755082 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.795030 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d857f780-d620-4d1a-bacb-8ecff74a012f","Type":"ContainerStarted","Data":"92e9725bb0b33e821391f6a06073f89cb18eed83ed18ae7f5c45e8f370352b71"} Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.802339 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"deadcd24-0a98-4f1d-986b-75187a3eccee","Type":"ContainerDied","Data":"1a691060e34750f3c08f1f945d405b6593c3d94e35e875c9f0e7dab8150f33c3"} Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.802427 4737 scope.go:117] "RemoveContainer" containerID="e2be2cc101276cae9cd96c6322ea82bb13c83bfa92517786990c72a87502e36a" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.802629 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.825642 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.059498933 podStartE2EDuration="15.825622499s" podCreationTimestamp="2026-01-26 18:55:55 +0000 UTC" firstStartedPulling="2026-01-26 18:55:56.544285136 +0000 UTC m=+1529.852479844" lastFinishedPulling="2026-01-26 18:56:10.310408702 +0000 UTC m=+1543.618603410" observedRunningTime="2026-01-26 18:56:10.823866078 +0000 UTC m=+1544.132060796" watchObservedRunningTime="2026-01-26 18:56:10.825622499 +0000 UTC m=+1544.133817207" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850048 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc 
kubenswrapper[4737]: I0126 18:56:10.850198 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850233 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850328 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850371 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850415 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts\") pod \"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850579 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t94b8\" (UniqueName: \"kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8\") pod 
\"deadcd24-0a98-4f1d-986b-75187a3eccee\" (UID: \"deadcd24-0a98-4f1d-986b-75187a3eccee\") " Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.850749 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.851170 4737 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/deadcd24-0a98-4f1d-986b-75187a3eccee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.851405 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs" (OuterVolumeSpecName: "logs") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.860246 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.861491 4737 scope.go:117] "RemoveContainer" containerID="17d21f64c9d2d1e2429d61a41c47c614ed746fecd46cf87a0749818145c44ab0" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.870438 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts" (OuterVolumeSpecName: "scripts") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.870717 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8" (OuterVolumeSpecName: "kube-api-access-t94b8") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "kube-api-access-t94b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.953300 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.953342 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.953355 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t94b8\" (UniqueName: \"kubernetes.io/projected/deadcd24-0a98-4f1d-986b-75187a3eccee-kube-api-access-t94b8\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.953366 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deadcd24-0a98-4f1d-986b-75187a3eccee-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:10 crc kubenswrapper[4737]: I0126 18:56:10.962288 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.003625 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data" (OuterVolumeSpecName: "config-data") pod "deadcd24-0a98-4f1d-986b-75187a3eccee" (UID: "deadcd24-0a98-4f1d-986b-75187a3eccee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.059588 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.059620 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deadcd24-0a98-4f1d-986b-75187a3eccee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.130381 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.288758 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.292033 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config\") pod \"c995be15-2ce8-471e-b1cb-880242eb10f6\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.292102 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle\") pod \"c995be15-2ce8-471e-b1cb-880242eb10f6\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.292247 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config\") pod \"c995be15-2ce8-471e-b1cb-880242eb10f6\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " Jan 26 18:56:11 crc 
kubenswrapper[4737]: I0126 18:56:11.292389 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs\") pod \"c995be15-2ce8-471e-b1cb-880242eb10f6\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.292485 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrrj\" (UniqueName: \"kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj\") pod \"c995be15-2ce8-471e-b1cb-880242eb10f6\" (UID: \"c995be15-2ce8-471e-b1cb-880242eb10f6\") " Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.299442 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj" (OuterVolumeSpecName: "kube-api-access-pnrrj") pod "c995be15-2ce8-471e-b1cb-880242eb10f6" (UID: "c995be15-2ce8-471e-b1cb-880242eb10f6"). InnerVolumeSpecName "kube-api-access-pnrrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.320359 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c995be15-2ce8-471e-b1cb-880242eb10f6" (UID: "c995be15-2ce8-471e-b1cb-880242eb10f6"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.330648 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.357455 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.357881 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-api" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.357894 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-api" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.358455 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.358465 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.358484 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.358490 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.358502 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api-log" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.358508 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api-log" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.359377 4737 
memory_manager.go:354] "RemoveStaleState removing state" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.359415 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.359427 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" containerName="cinder-api-log" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.359439 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" containerName="neutron-api" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.362237 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.366828 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.367059 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.370840 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.378321 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.394908 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnrrj\" (UniqueName: \"kubernetes.io/projected/c995be15-2ce8-471e-b1cb-880242eb10f6-kube-api-access-pnrrj\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.394935 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.403336 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dd8ff9d59-rttts"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.437863 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c995be15-2ce8-471e-b1cb-880242eb10f6" (UID: "c995be15-2ce8-471e-b1cb-880242eb10f6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.474669 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config" (OuterVolumeSpecName: "config") pod "c995be15-2ce8-471e-b1cb-880242eb10f6" (UID: "c995be15-2ce8-471e-b1cb-880242eb10f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.484587 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c995be15-2ce8-471e-b1cb-880242eb10f6" (UID: "c995be15-2ce8-471e-b1cb-880242eb10f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501625 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/715806cf-cb82-4224-bdb0-8aed20e29b49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501686 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715806cf-cb82-4224-bdb0-8aed20e29b49-logs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501800 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz24k\" (UniqueName: \"kubernetes.io/projected/715806cf-cb82-4224-bdb0-8aed20e29b49-kube-api-access-pz24k\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501834 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-public-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501863 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.501900 
4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.502050 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-scripts\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.502234 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.502299 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data-custom\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.502436 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.502454 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 
18:56:11.502465 4737 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c995be15-2ce8-471e-b1cb-880242eb10f6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604158 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz24k\" (UniqueName: \"kubernetes.io/projected/715806cf-cb82-4224-bdb0-8aed20e29b49-kube-api-access-pz24k\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604213 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-public-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604241 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604274 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604297 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-scripts\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " 
pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604356 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604382 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data-custom\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604417 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/715806cf-cb82-4224-bdb0-8aed20e29b49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.604441 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715806cf-cb82-4224-bdb0-8aed20e29b49-logs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.605007 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/715806cf-cb82-4224-bdb0-8aed20e29b49-logs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.613032 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/715806cf-cb82-4224-bdb0-8aed20e29b49-etc-machine-id\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.637352 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.640237 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.649849 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.655796 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz24k\" (UniqueName: \"kubernetes.io/projected/715806cf-cb82-4224-bdb0-8aed20e29b49-kube-api-access-pz24k\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.656050 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data-custom\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.656625 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-public-tls-certs\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 
18:56:11.656695 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.657131 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.657191 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4flsc" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.657785 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-scripts\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.659090 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.659741 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/715806cf-cb82-4224-bdb0-8aed20e29b49-config-data\") pod \"cinder-api-0\" (UID: \"715806cf-cb82-4224-bdb0-8aed20e29b49\") " pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.700902 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.741635 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: 
\"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.741771 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.741973 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwt89\" (UniqueName: \"kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.742297 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.792172 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.794810 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.843208 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.845271 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849001 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849045 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849128 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849158 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849187 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: 
\"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wzq\" (UniqueName: \"kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849240 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849265 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwt89\" (UniqueName: \"kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849317 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.849366 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data\") pod \"heat-engine-5f8dbb8f99-b67tw\" 
(UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.850262 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.863436 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.867631 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.876922 4737 generic.go:334] "Generic (PLEG): container finished" podID="dce274e8-16df-4b86-8803-b681b0160bc3" containerID="7c6b320d56f98865258b0d472d82a5d9ee5605b4552d892201d84591eb450942" exitCode=0 Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.876997 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerDied","Data":"7c6b320d56f98865258b0d472d82a5d9ee5605b4552d892201d84591eb450942"} Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.879720 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75ff77bb76-fx82z" event={"ID":"c995be15-2ce8-471e-b1cb-880242eb10f6","Type":"ContainerDied","Data":"334c1fc8abee0a67e38ad5da9a2e50bdaecaac6e5a1356993237fd30b9deec56"} Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.879776 4737 scope.go:117] "RemoveContainer" 
containerID="e71eaf70b7225ef5806219a093666ec7834a1bfd927b32cdcef79f1ad0f6a97d" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.879771 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75ff77bb76-fx82z" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.886529 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.902866 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwt89\" (UniqueName: \"kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.905470 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dd8ff9d59-rttts" event={"ID":"38df0a7c-47f1-4834-b970-d815d009b6d7","Type":"ContainerStarted","Data":"00f72d0475fa6e5591a253bf5d8983e7517b4696d25d5292dce8a9eb2e4bd236"} Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.906492 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom\") pod \"heat-engine-5f8dbb8f99-b67tw\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.921225 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.930446 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.945083 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952099 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952202 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952297 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsm2j\" (UniqueName: \"kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952541 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952582 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: 
\"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952609 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952678 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952751 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wzq\" (UniqueName: \"kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952793 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.952830 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: 
\"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.954194 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.957185 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.957875 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.958856 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.961306 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:11 crc 
kubenswrapper[4737]: I0126 18:56:11.963829 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"] Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.974583 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="sg-core" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.974620 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="sg-core" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.974836 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="proxy-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.974849 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="proxy-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.974868 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-central-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.974878 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-central-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: E0126 18:56:11.974894 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-notification-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.974900 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-notification-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.975462 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="proxy-httpd" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 
18:56:11.975494 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-notification-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.975523 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="sg-core" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.975541 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" containerName="ceilometer-central-agent" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.987905 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"] Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.988121 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.993519 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 26 18:56:11 crc kubenswrapper[4737]: I0126 18:56:11.996658 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wzq\" (UniqueName: \"kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq\") pod \"dnsmasq-dns-7756b9d78c-z4djp\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.006394 4737 scope.go:117] "RemoveContainer" containerID="ab6b962c9faa096a1c52d6d51fa797c462cb80650a5f052fca9c9324622c4e4a" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.051616 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.054003 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.054346 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fncsd\" (UniqueName: \"kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.054570 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.054756 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.054930 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.055039 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.055255 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data\") pod \"dce274e8-16df-4b86-8803-b681b0160bc3\" (UID: \"dce274e8-16df-4b86-8803-b681b0160bc3\") " Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.055842 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.055973 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.056084 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.056205 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsm2j\" (UniqueName: 
\"kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.056295 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.056592 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbf5m\" (UniqueName: \"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.056840 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.057018 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.065803 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.072847 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.076512 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.078321 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.081279 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.081299 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75ff77bb76-fx82z"] Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.081602 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.085346 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd" (OuterVolumeSpecName: "kube-api-access-fncsd") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "kube-api-access-fncsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.108926 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsm2j\" (UniqueName: \"kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j\") pod \"heat-cfnapi-b844f4d95-b87n7\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.109831 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts" (OuterVolumeSpecName: "scripts") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.155941 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.163837 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.163903 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164034 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbf5m\" (UniqueName: \"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164236 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164322 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164342 4737 reconciler_common.go:293] "Volume detached for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164352 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fncsd\" (UniqueName: \"kubernetes.io/projected/dce274e8-16df-4b86-8803-b681b0160bc3-kube-api-access-fncsd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.164364 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce274e8-16df-4b86-8803-b681b0160bc3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.170576 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.172780 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.184768 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.203684 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.205776 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.206674 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbf5m\" (UniqueName: \"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m\") pod \"heat-api-6ff767c7d5-88fm9\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.270892 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.344085 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.574303 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data" (OuterVolumeSpecName: "config-data") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.592128 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.637736 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dce274e8-16df-4b86-8803-b681b0160bc3" (UID: "dce274e8-16df-4b86-8803-b681b0160bc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.695562 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce274e8-16df-4b86-8803-b681b0160bc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.754692 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.941865 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dd8ff9d59-rttts" event={"ID":"38df0a7c-47f1-4834-b970-d815d009b6d7","Type":"ContainerStarted","Data":"9505c89e7153fb6408c34b3abfe6f308deb5a07e643de5dd4174a632c15a53f4"} Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.941932 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dd8ff9d59-rttts" event={"ID":"38df0a7c-47f1-4834-b970-d815d009b6d7","Type":"ContainerStarted","Data":"a3fd49ca7d997353be868485bf05b9ab0a37b6566761db45e098e974cd80154f"} Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.942156 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 
18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.942205 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.952059 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"715806cf-cb82-4224-bdb0-8aed20e29b49","Type":"ContainerStarted","Data":"b3c8a0c7d97c1d15919e2885fc038ec62bc2e422a3649dc0fb6f95144d60bd4f"} Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.970112 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce274e8-16df-4b86-8803-b681b0160bc3","Type":"ContainerDied","Data":"d6c33494fa081f904b65b97585a82b0340bfa833ed6c04301baf35e71db0c587"} Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.970177 4737 scope.go:117] "RemoveContainer" containerID="ec29fec033478998c456ee72e5cedecbcd414e1690994e988310b88c816f604e" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.970382 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:12 crc kubenswrapper[4737]: I0126 18:56:12.994046 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6dd8ff9d59-rttts" podStartSLOduration=10.994020105 podStartE2EDuration="10.994020105s" podCreationTimestamp="2026-01-26 18:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:12.969298308 +0000 UTC m=+1546.277493016" watchObservedRunningTime="2026-01-26 18:56:12.994020105 +0000 UTC m=+1546.302214813" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.168517 4737 scope.go:117] "RemoveContainer" containerID="3ba4ea608b2a81a054094b03a442ad33b3817e85249a779afcab5b0ef5056092" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.186498 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c995be15-2ce8-471e-b1cb-880242eb10f6" path="/var/lib/kubelet/pods/c995be15-2ce8-471e-b1cb-880242eb10f6/volumes" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.187801 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deadcd24-0a98-4f1d-986b-75187a3eccee" path="/var/lib/kubelet/pods/deadcd24-0a98-4f1d-986b-75187a3eccee/volumes" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.188927 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.290146 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.314702 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.400495 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.428841 4737 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.432746 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.442707 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.453595 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:56:13 crc kubenswrapper[4737]: W0126 18:56:13.466213 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7a68838_86b3_499a_86cd_943dcb86e129.slice/crio-921a42853efa0835a026db284e6c35f47c6d5b2309102f938d7500d7e27b8cdc WatchSource:0}: Error finding container 921a42853efa0835a026db284e6c35f47c6d5b2309102f938d7500d7e27b8cdc: Status 404 returned error can't find the container with id 921a42853efa0835a026db284e6c35f47c6d5b2309102f938d7500d7e27b8cdc Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.492006 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.536288 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546611 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546681 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546723 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvx5w\" (UniqueName: \"kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546759 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546833 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546863 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.546912 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd\") pod \"ceilometer-0\" (UID: 
\"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.566153 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"] Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649588 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649749 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649788 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649811 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvx5w\" (UniqueName: \"kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649839 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" 
Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.649994 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.650026 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.650708 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.650761 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.665552 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.671503 4737 scope.go:117] "RemoveContainer" containerID="7c6b320d56f98865258b0d472d82a5d9ee5605b4552d892201d84591eb450942" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.672895 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.689735 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvx5w\" (UniqueName: \"kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.713292 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.739371 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " pod="openstack/ceilometer-0" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.800285 4737 scope.go:117] "RemoveContainer" containerID="7e562ddf445da0968e6968ec36b524eefb8335cd84d34770959ff0bdaddf959e" Jan 26 18:56:13 crc kubenswrapper[4737]: I0126 18:56:13.834408 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.051268 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8dbb8f99-b67tw" event={"ID":"6025d581-6326-4154-b2ad-ba111e0d0f61","Type":"ContainerStarted","Data":"f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044"} Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.051546 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8dbb8f99-b67tw" event={"ID":"6025d581-6326-4154-b2ad-ba111e0d0f61","Type":"ContainerStarted","Data":"445485ab3ac87c6dd79fbd1b397011dba978dccace9ae54e9f615d9e0e482a7d"} Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.051609 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.103437 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff767c7d5-88fm9" event={"ID":"2d51eb8e-1bae-4432-9997-f74055d01000","Type":"ContainerStarted","Data":"ddbfed64353f9c02d8d27f23bf1bfc87ce441bbf2e7e1bbe764ad1eb63e37731"} Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.105319 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b844f4d95-b87n7" event={"ID":"11ac79cf-f745-4084-ba59-ee3ff364518d","Type":"ContainerStarted","Data":"a08a5f2c4fe4d158e3b90169ba757f2ca2b8ebbfd282fa91945fbdd4681052f0"} Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.108189 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" event={"ID":"b7a68838-86b3-499a-86cd-943dcb86e129","Type":"ContainerStarted","Data":"921a42853efa0835a026db284e6c35f47c6d5b2309102f938d7500d7e27b8cdc"} Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.571003 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5f8dbb8f99-b67tw" 
podStartSLOduration=3.570955856 podStartE2EDuration="3.570955856s" podCreationTimestamp="2026-01-26 18:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:14.077576189 +0000 UTC m=+1547.385770897" watchObservedRunningTime="2026-01-26 18:56:14.570955856 +0000 UTC m=+1547.879150564" Jan 26 18:56:14 crc kubenswrapper[4737]: I0126 18:56:14.593327 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:14 crc kubenswrapper[4737]: W0126 18:56:14.598019 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee37723_a972_4371_9193_bf20e0126bca.slice/crio-ad724d4c1de4c38566ca5720b3ecab0543b2c0133f28cd6a50096fcc94c8843b WatchSource:0}: Error finding container ad724d4c1de4c38566ca5720b3ecab0543b2c0133f28cd6a50096fcc94c8843b: Status 404 returned error can't find the container with id ad724d4c1de4c38566ca5720b3ecab0543b2c0133f28cd6a50096fcc94c8843b Jan 26 18:56:15 crc kubenswrapper[4737]: I0126 18:56:15.006432 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce274e8-16df-4b86-8803-b681b0160bc3" path="/var/lib/kubelet/pods/dce274e8-16df-4b86-8803-b681b0160bc3/volumes" Jan 26 18:56:15 crc kubenswrapper[4737]: I0126 18:56:15.141574 4737 generic.go:334] "Generic (PLEG): container finished" podID="b7a68838-86b3-499a-86cd-943dcb86e129" containerID="36c9eb0d5966f1a83c16dedc873c3a51d737a01844299a49d77401c67793c528" exitCode=0 Jan 26 18:56:15 crc kubenswrapper[4737]: I0126 18:56:15.141677 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" event={"ID":"b7a68838-86b3-499a-86cd-943dcb86e129","Type":"ContainerDied","Data":"36c9eb0d5966f1a83c16dedc873c3a51d737a01844299a49d77401c67793c528"} Jan 26 18:56:15 crc kubenswrapper[4737]: I0126 18:56:15.150843 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerStarted","Data":"ad724d4c1de4c38566ca5720b3ecab0543b2c0133f28cd6a50096fcc94c8843b"} Jan 26 18:56:15 crc kubenswrapper[4737]: I0126 18:56:15.198161 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"715806cf-cb82-4224-bdb0-8aed20e29b49","Type":"ContainerStarted","Data":"9a673d4671c6952b5a63c8edebf43bdf4da919c69d195ddb8b21f26f27b1f749"} Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.203056 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"715806cf-cb82-4224-bdb0-8aed20e29b49","Type":"ContainerStarted","Data":"f9bba7652279cc4ea30a364ed5c79b8284da1720ad47bee18f1c1d2d6660b634"} Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.203544 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.208230 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" event={"ID":"b7a68838-86b3-499a-86cd-943dcb86e129","Type":"ContainerStarted","Data":"6ea931668de6131c7939bbf3ffd3496f7ac394220cee3dc2c9c2db9dd9bd1784"} Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.209291 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.239373 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.239348905 podStartE2EDuration="6.239348905s" podCreationTimestamp="2026-01-26 18:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:17.238546005 +0000 UTC m=+1550.546740713" watchObservedRunningTime="2026-01-26 18:56:17.239348905 +0000 UTC 
m=+1550.547543613" Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.273034 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" podStartSLOduration=6.27301031 podStartE2EDuration="6.27301031s" podCreationTimestamp="2026-01-26 18:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:17.262941805 +0000 UTC m=+1550.571136513" watchObservedRunningTime="2026-01-26 18:56:17.27301031 +0000 UTC m=+1550.581205018" Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.425623 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:17 crc kubenswrapper[4737]: I0126 18:56:17.427865 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dd8ff9d59-rttts" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.219516 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerStarted","Data":"4e4f05e8f9757a70023f46eedbdb049345ee5ee7fba3a371ead8f1eb237611f7"} Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.545887 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.561718 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.580628 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-86744b887-d62q9"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.582503 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.614253 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.616259 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.667643 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86744b887-d62q9"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690232 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690270 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pbj\" (UniqueName: \"kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690287 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690309 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vcmz\" 
(UniqueName: \"kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690338 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rvp\" (UniqueName: \"kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690369 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690401 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690438 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690461 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690521 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690563 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.690629 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.721224 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.794310 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"] Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797473 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797532 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797553 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797609 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797638 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797689 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797741 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pbj\" (UniqueName: \"kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797757 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797775 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797795 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vcmz\" (UniqueName: \"kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797822 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rvp\" (UniqueName: 
\"kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.797846 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.817624 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.822264 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.824789 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.827604 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data\") pod 
\"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.829255 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.834988 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.835681 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.845918 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vcmz\" (UniqueName: \"kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.851925 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom\") pod \"heat-engine-5f8757c766-6hm2h\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " 
pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.858431 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pbj\" (UniqueName: \"kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.870626 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle\") pod \"heat-api-86744b887-d62q9\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:18 crc kubenswrapper[4737]: I0126 18:56:18.880962 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rvp\" (UniqueName: \"kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp\") pod \"heat-cfnapi-84dfd788f9-bd7kw\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") " pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.046802 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.068565 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.097761 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.264392 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff767c7d5-88fm9" event={"ID":"2d51eb8e-1bae-4432-9997-f74055d01000","Type":"ContainerStarted","Data":"5bd0fe5450e0b014f9b1890d301306732e938dde828c492337ae1f682ddb1db5"} Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.265626 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.277663 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b844f4d95-b87n7" event={"ID":"11ac79cf-f745-4084-ba59-ee3ff364518d","Type":"ContainerStarted","Data":"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4"} Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.279137 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.326616 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6ff767c7d5-88fm9" podStartSLOduration=3.656397539 podStartE2EDuration="8.326594898s" podCreationTimestamp="2026-01-26 18:56:11 +0000 UTC" firstStartedPulling="2026-01-26 18:56:13.7175299 +0000 UTC m=+1547.025724608" lastFinishedPulling="2026-01-26 18:56:18.387727259 +0000 UTC m=+1551.695921967" observedRunningTime="2026-01-26 18:56:19.290299312 +0000 UTC m=+1552.598494020" watchObservedRunningTime="2026-01-26 18:56:19.326594898 +0000 UTC m=+1552.634789606" Jan 26 18:56:19 crc kubenswrapper[4737]: I0126 18:56:19.335349 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-b844f4d95-b87n7" podStartSLOduration=3.352609615 podStartE2EDuration="8.335327872s" podCreationTimestamp="2026-01-26 18:56:11 +0000 UTC" firstStartedPulling="2026-01-26 
18:56:13.399911563 +0000 UTC m=+1546.708106271" lastFinishedPulling="2026-01-26 18:56:18.38262982 +0000 UTC m=+1551.690824528" observedRunningTime="2026-01-26 18:56:19.316245216 +0000 UTC m=+1552.624439934" watchObservedRunningTime="2026-01-26 18:56:19.335327872 +0000 UTC m=+1552.643522580" Jan 26 18:56:20 crc kubenswrapper[4737]: I0126 18:56:20.261983 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 18:56:20 crc kubenswrapper[4737]: I0126 18:56:20.380058 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerStarted","Data":"6aea5f7e1f8b0af0106347d80da70bce245e198db78acbec6f4607ef3246ecb9"} Jan 26 18:56:20 crc kubenswrapper[4737]: I0126 18:56:20.429513 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86744b887-d62q9"] Jan 26 18:56:20 crc kubenswrapper[4737]: I0126 18:56:20.593875 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"] Jan 26 18:56:20 crc kubenswrapper[4737]: W0126 18:56:20.635421 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa18399_89e5_455e_a44d_3f862b8c0237.slice/crio-15aee058145ecf98f9f86843eb125e28ef401ab3e5021b4cdf0be2ee1b9b068d WatchSource:0}: Error finding container 15aee058145ecf98f9f86843eb125e28ef401ab3e5021b4cdf0be2ee1b9b068d: Status 404 returned error can't find the container with id 15aee058145ecf98f9f86843eb125e28ef401ab3e5021b4cdf0be2ee1b9b068d Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.380294 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.380893 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-log" containerID="cri-o://6debd79f4fd6a44a765ba729f95566537cb9c3413f3c8578e6c8bcef6cd06d62" gracePeriod=30 Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.381885 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-httpd" containerID="cri-o://d7375bbb295caf445cf9905cc9faf3379d6b72b3a2f9e577ed4d1edfe37cb42b" gracePeriod=30 Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.433556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerStarted","Data":"0e3eb41a89a3bf62ad71bddca2dfc086bb8e3808390f0b8f21b4e096331d4314"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.433634 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerStarted","Data":"15aee058145ecf98f9f86843eb125e28ef401ab3e5021b4cdf0be2ee1b9b068d"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.436510 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.449783 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8757c766-6hm2h" event={"ID":"b65814f9-7380-40c2-8d93-d95858c98d6b","Type":"ContainerStarted","Data":"2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.449843 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8757c766-6hm2h" event={"ID":"b65814f9-7380-40c2-8d93-d95858c98d6b","Type":"ContainerStarted","Data":"7c3db5c789880e7ccd4e1aa703650d2eb3fa7c664c97e08be94127b6c19c85e1"} Jan 26 18:56:21 crc 
kubenswrapper[4737]: I0126 18:56:21.451673 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.465731 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerStarted","Data":"402874a2c5bf9f3c1ad3566ef6dcaf25f325e681e995bd4b0ecfa3b8a1b7b693"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.482271 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerStarted","Data":"6e78e0ea91cc67dd4840a7005b36f61ef9f77b25fb2e342121a49dbbe9f73cd7"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.482316 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.482327 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerStarted","Data":"08fb1ca923f5f8b02dbd0ad1dce69ca211e5ef87ed8839517b5077c026ff4f06"} Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.515356 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-86744b887-d62q9" podStartSLOduration=3.515328509 podStartE2EDuration="3.515328509s" podCreationTimestamp="2026-01-26 18:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:21.461660816 +0000 UTC m=+1554.769855524" watchObservedRunningTime="2026-01-26 18:56:21.515328509 +0000 UTC m=+1554.823523217" Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.561588 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-engine-5f8757c766-6hm2h" podStartSLOduration=3.561567217 podStartE2EDuration="3.561567217s" podCreationTimestamp="2026-01-26 18:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:21.490594521 +0000 UTC m=+1554.798789229" watchObservedRunningTime="2026-01-26 18:56:21.561567217 +0000 UTC m=+1554.869761925" Jan 26 18:56:21 crc kubenswrapper[4737]: I0126 18:56:21.619235 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" podStartSLOduration=3.619212381 podStartE2EDuration="3.619212381s" podCreationTimestamp="2026-01-26 18:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:21.556086029 +0000 UTC m=+1554.864280747" watchObservedRunningTime="2026-01-26 18:56:21.619212381 +0000 UTC m=+1554.927407089" Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.160358 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.308202 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.309643 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="dnsmasq-dns" containerID="cri-o://e544ad69ca751ec62e21a7ac226c2fe50389582109a0c738cf1fcae76616aeb9" gracePeriod=10 Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.510145 4737 generic.go:334] "Generic (PLEG): container finished" podID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerID="6e78e0ea91cc67dd4840a7005b36f61ef9f77b25fb2e342121a49dbbe9f73cd7" exitCode=1 Jan 26 18:56:22 crc 
kubenswrapper[4737]: I0126 18:56:22.510465 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerDied","Data":"6e78e0ea91cc67dd4840a7005b36f61ef9f77b25fb2e342121a49dbbe9f73cd7"} Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.511136 4737 scope.go:117] "RemoveContainer" containerID="6e78e0ea91cc67dd4840a7005b36f61ef9f77b25fb2e342121a49dbbe9f73cd7" Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.520534 4737 generic.go:334] "Generic (PLEG): container finished" podID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerID="6debd79f4fd6a44a765ba729f95566537cb9c3413f3c8578e6c8bcef6cd06d62" exitCode=143 Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.520614 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerDied","Data":"6debd79f4fd6a44a765ba729f95566537cb9c3413f3c8578e6c8bcef6cd06d62"} Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.546929 4737 generic.go:334] "Generic (PLEG): container finished" podID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerID="e544ad69ca751ec62e21a7ac226c2fe50389582109a0c738cf1fcae76616aeb9" exitCode=0 Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.547030 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerDied","Data":"e544ad69ca751ec62e21a7ac226c2fe50389582109a0c738cf1fcae76616aeb9"} Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.573838 4737 generic.go:334] "Generic (PLEG): container finished" podID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerID="0e3eb41a89a3bf62ad71bddca2dfc086bb8e3808390f0b8f21b4e096331d4314" exitCode=1 Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.573961 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerDied","Data":"0e3eb41a89a3bf62ad71bddca2dfc086bb8e3808390f0b8f21b4e096331d4314"} Jan 26 18:56:22 crc kubenswrapper[4737]: I0126 18:56:22.574623 4737 scope.go:117] "RemoveContainer" containerID="0e3eb41a89a3bf62ad71bddca2dfc086bb8e3808390f0b8f21b4e096331d4314" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.258184 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.340599 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.340701 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snslv\" (UniqueName: \"kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.340820 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.340929 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " 
Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.341150 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.341259 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc\") pod \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\" (UID: \"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15\") " Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.352491 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv" (OuterVolumeSpecName: "kube-api-access-snslv") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "kube-api-access-snslv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.410930 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.444108 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.444831 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snslv\" (UniqueName: \"kubernetes.io/projected/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-kube-api-access-snslv\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.471560 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config" (OuterVolumeSpecName: "config") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.479019 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.493912 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.524988 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" (UID: "67c8afbc-8ed9-4ebb-b150-f6f5257f7b15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.548232 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.548277 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.548293 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.548307 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.591147 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerStarted","Data":"b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e"} Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.592969 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.601497 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" event={"ID":"67c8afbc-8ed9-4ebb-b150-f6f5257f7b15","Type":"ContainerDied","Data":"0773aeaf6e888366f35a8a9a6b297c774861e220ed355ba46447e5071da7b6ff"} Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.601560 4737 scope.go:117] "RemoveContainer" containerID="e544ad69ca751ec62e21a7ac226c2fe50389582109a0c738cf1fcae76616aeb9" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.601732 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.614011 4737 generic.go:334] "Generic (PLEG): container finished" podID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec" exitCode=1 Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.614382 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerDied","Data":"75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec"} Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.615352 4737 scope.go:117] "RemoveContainer" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec" Jan 26 18:56:23 crc kubenswrapper[4737]: E0126 18:56:23.615788 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86744b887-d62q9_openstack(3aa18399-89e5-455e-a44d-3f862b8c0237)\"" pod="openstack/heat-api-86744b887-d62q9" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.628428 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerStarted","Data":"1907b2858a93268f4ad912ad74c45fb6c7fffd8e1b845e014e57a45e3f151aca"} Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.628712 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.633025 4737 scope.go:117] "RemoveContainer" containerID="eb7684d9a841f565e5357a1df2da7c7753f9620fa191cc3ff74d931e4cb881a7" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.669710 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.154534341 podStartE2EDuration="10.669686907s" podCreationTimestamp="2026-01-26 18:56:13 +0000 UTC" firstStartedPulling="2026-01-26 18:56:14.60929361 +0000 UTC m=+1547.917488318" lastFinishedPulling="2026-01-26 18:56:22.124446176 +0000 UTC m=+1555.432640884" observedRunningTime="2026-01-26 18:56:23.656031049 +0000 UTC m=+1556.964225767" watchObservedRunningTime="2026-01-26 18:56:23.669686907 +0000 UTC m=+1556.977881625" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.684775 4737 scope.go:117] "RemoveContainer" containerID="0e3eb41a89a3bf62ad71bddca2dfc086bb8e3808390f0b8f21b4e096331d4314" Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.739119 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:56:23 crc kubenswrapper[4737]: I0126 18:56:23.758405 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6rgnn"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.069743 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.069833 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86744b887-d62q9" Jan 26 
18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.201426 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.202456 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-b844f4d95-b87n7" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi" containerID="cri-o://7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4" gracePeriod=60 Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.233611 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.233926 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6ff767c7d5-88fm9" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api" containerID="cri-o://5bd0fe5450e0b014f9b1890d301306732e938dde828c492337ae1f682ddb1db5" gracePeriod=60 Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.254296 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff767c7d5-88fm9" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": EOF" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.276160 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 18:56:24 crc kubenswrapper[4737]: E0126 18:56:24.277225 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="init" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.277347 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="init" Jan 26 18:56:24 crc kubenswrapper[4737]: E0126 18:56:24.277445 4737 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="dnsmasq-dns" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.277527 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="dnsmasq-dns" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.277934 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="dnsmasq-dns" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.302288 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.303336 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-b844f4d95-b87n7" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": EOF" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.307632 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.308461 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.308507 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.310541 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.315110 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.315456 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.331400 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.378133 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386354 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386441 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386531 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: 
I0126 18:56:24.386582 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9bv\" (UniqueName: \"kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386613 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386676 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386703 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386727 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " 
pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.386803 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.387004 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.387054 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczkp\" (UniqueName: \"kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.387143 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.488955 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle\") pod \"heat-api-5494f5754b-8k4bc\" (UID: 
\"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.488995 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489049 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489148 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489172 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczkp\" (UniqueName: \"kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489204 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " 
pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489327 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489355 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489399 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489426 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9bv\" (UniqueName: \"kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489447 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 
18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.489482 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.496909 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.497056 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.497261 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.499145 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.499870 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.500781 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.507689 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.507856 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.508770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.509768 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.513920 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczkp\" (UniqueName: \"kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp\") pod \"heat-api-5494f5754b-8k4bc\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.514785 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9bv\" (UniqueName: \"kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv\") pod \"heat-cfnapi-755b5655f9-7jhg9\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.643438 4737 generic.go:334] "Generic (PLEG): container finished" podID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerID="b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e" exitCode=1 Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.643528 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerDied","Data":"b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e"} Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.643597 4737 scope.go:117] "RemoveContainer" containerID="6e78e0ea91cc67dd4840a7005b36f61ef9f77b25fb2e342121a49dbbe9f73cd7" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.644373 4737 scope.go:117] "RemoveContainer" containerID="b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e" Jan 26 18:56:24 crc kubenswrapper[4737]: E0126 18:56:24.644845 4737 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84dfd788f9-bd7kw_openstack(ee3532ad-ceed-44bc-a5ab-10a0710c1ba5)\"" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.657594 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.659518 4737 scope.go:117] "RemoveContainer" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec" Jan 26 18:56:24 crc kubenswrapper[4737]: E0126 18:56:24.659894 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86744b887-d62q9_openstack(3aa18399-89e5-455e-a44d-3f862b8c0237)\"" pod="openstack/heat-api-86744b887-d62q9" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" Jan 26 18:56:24 crc kubenswrapper[4737]: I0126 18:56:24.675174 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.041815 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" path="/var/lib/kubelet/pods/67c8afbc-8ed9-4ebb-b150-f6f5257f7b15/volumes" Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.503002 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.671785 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.675195 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5494f5754b-8k4bc" event={"ID":"3e432095-1a99-44ef-8941-dd57947cfea2","Type":"ContainerStarted","Data":"4b84466188b9d0e36a36e1538367c696778a98d0c965822666729d8bab2cb6a1"} Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.682128 4737 scope.go:117] "RemoveContainer" containerID="b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e" Jan 26 18:56:25 crc kubenswrapper[4737]: E0126 18:56:25.682394 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84dfd788f9-bd7kw_openstack(ee3532ad-ceed-44bc-a5ab-10a0710c1ba5)\"" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.691384 4737 generic.go:334] "Generic (PLEG): container finished" podID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerID="d7375bbb295caf445cf9905cc9faf3379d6b72b3a2f9e577ed4d1edfe37cb42b" exitCode=0 Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.691509 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerDied","Data":"d7375bbb295caf445cf9905cc9faf3379d6b72b3a2f9e577ed4d1edfe37cb42b"} Jan 26 18:56:25 crc kubenswrapper[4737]: I0126 18:56:25.692244 4737 scope.go:117] "RemoveContainer" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec" Jan 26 18:56:25 crc kubenswrapper[4737]: E0126 18:56:25.692516 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86744b887-d62q9_openstack(3aa18399-89e5-455e-a44d-3f862b8c0237)\"" pod="openstack/heat-api-86744b887-d62q9" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.013401 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.014041 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-log" containerID="cri-o://fbdf9cd4e5898363e13e592218834c4a83818b60685c65abeca87b0bc8064703" gracePeriod=30 Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.014628 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-httpd" containerID="cri-o://0be6c934d819d7882080f2d5bcefc3f6ede201b6a0c105d7d0b2ec4ca03547ab" gracePeriod=30 Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.217847 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.359840 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.359927 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360020 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360225 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360283 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4tpb\" (UniqueName: \"kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360355 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360584 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.360610 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle\") pod \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\" (UID: \"8c7f5f39-5fca-4ebd-b06b-1022c2500338\") " Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.367705 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.368219 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs" (OuterVolumeSpecName: "logs") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.372287 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb" (OuterVolumeSpecName: "kube-api-access-q4tpb") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "kube-api-access-q4tpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.380314 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts" (OuterVolumeSpecName: "scripts") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.445686 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43" (OuterVolumeSpecName: "glance") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.478537 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.478594 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.478614 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4tpb\" (UniqueName: \"kubernetes.io/projected/8c7f5f39-5fca-4ebd-b06b-1022c2500338-kube-api-access-q4tpb\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.478645 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c7f5f39-5fca-4ebd-b06b-1022c2500338-logs\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.478680 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") on node \"crc\" "
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.489782 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data" (OuterVolumeSpecName: "config-data") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.523084 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.561082 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c7f5f39-5fca-4ebd-b06b-1022c2500338" (UID: "8c7f5f39-5fca-4ebd-b06b-1022c2500338"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.578566 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.578712 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43") on node "crc"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.582703 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.582753 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.582769 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.582777 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7f5f39-5fca-4ebd-b06b-1022c2500338-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.736260 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5494f5754b-8k4bc" event={"ID":"3e432095-1a99-44ef-8941-dd57947cfea2","Type":"ContainerStarted","Data":"2bb1c3822e504e928e9e99802f63896084a67c29e44d0d476bf0cfbf3a001047"}
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.737932 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5494f5754b-8k4bc"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.773580 4737 generic.go:334] "Generic (PLEG): container finished" podID="f2105678-a452-433f-aa75-908321272f46" containerID="fbdf9cd4e5898363e13e592218834c4a83818b60685c65abeca87b0bc8064703" exitCode=143
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.773664 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerDied","Data":"fbdf9cd4e5898363e13e592218834c4a83818b60685c65abeca87b0bc8064703"}
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.789616 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5494f5754b-8k4bc" podStartSLOduration=2.789594797 podStartE2EDuration="2.789594797s" podCreationTimestamp="2026-01-26 18:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:26.771812242 +0000 UTC m=+1560.080006940" watchObservedRunningTime="2026-01-26 18:56:26.789594797 +0000 UTC m=+1560.097789505"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.793867 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c7f5f39-5fca-4ebd-b06b-1022c2500338","Type":"ContainerDied","Data":"f440ea29c2469be4a2dd1a6f421238767af9d15fff6b0c58bff8a9cd59062828"}
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.793921 4737 scope.go:117] "RemoveContainer" containerID="d7375bbb295caf445cf9905cc9faf3379d6b72b3a2f9e577ed4d1edfe37cb42b"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.794047 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.803043 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" event={"ID":"514a8219-8732-4d4b-abe6-154d215f65ed","Type":"ContainerStarted","Data":"34933807e8713fbb46bfe29c3169d3d3a8435a9b015d9b5d3c70887166bfcc2c"}
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.803140 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" event={"ID":"514a8219-8732-4d4b-abe6-154d215f65ed","Type":"ContainerStarted","Data":"0557ea9d71d3b623a7939eb8d0a1d6b2b4745470d449e6b3b26a5cd1ad736075"}
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.804505 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-755b5655f9-7jhg9"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.839536 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" podStartSLOduration=2.83950906 podStartE2EDuration="2.83950906s" podCreationTimestamp="2026-01-26 18:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:26.822600436 +0000 UTC m=+1560.130795144" watchObservedRunningTime="2026-01-26 18:56:26.83950906 +0000 UTC m=+1560.147703768"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.888384 4737 scope.go:117] "RemoveContainer" containerID="6debd79f4fd6a44a765ba729f95566537cb9c3413f3c8578e6c8bcef6cd06d62"
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.919752 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.959292 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 18:56:26 crc kubenswrapper[4737]: I0126 18:56:26.975253 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="715806cf-cb82-4224-bdb0-8aed20e29b49" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.216:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.040718 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" path="/var/lib/kubelet/pods/8c7f5f39-5fca-4ebd-b06b-1022c2500338/volumes"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.055342 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 18:56:27 crc kubenswrapper[4737]: E0126 18:56:27.055745 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-log"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.055765 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-log"
Jan 26 18:56:27 crc kubenswrapper[4737]: E0126 18:56:27.055815 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-httpd"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.055821 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-httpd"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.056026 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-httpd"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.056042 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7f5f39-5fca-4ebd-b06b-1022c2500338" containerName="glance-log"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.062043 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.062182 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.067318 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.067616 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213270 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213374 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213411 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213481 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-logs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213592 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.213835 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.214008 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.214102 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdff\" (UniqueName: \"kubernetes.io/projected/5de2e392-7605-4b8c-831c-4101c098fc0e-kube-api-access-qmdff\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.315940 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316006 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316061 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-logs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316184 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316240 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316293 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316325 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmdff\" (UniqueName: \"kubernetes.io/projected/5de2e392-7605-4b8c-831c-4101c098fc0e-kube-api-access-qmdff\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316360 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316608 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.316703 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5de2e392-7605-4b8c-831c-4101c098fc0e-logs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.323220 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.323734 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.323874 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.324317 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.324352 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/543a0865806adc8a1aa4ef4cf4d6f37534ce583cc9c348d82f63f0aa114aec1f/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.324504 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de2e392-7605-4b8c-831c-4101c098fc0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.335723 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmdff\" (UniqueName: \"kubernetes.io/projected/5de2e392-7605-4b8c-831c-4101c098fc0e-kube-api-access-qmdff\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.417420 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-503a318c-a4ff-4b14-bac7-f0b8ecb31d43\") pod \"glance-default-external-api-0\" (UID: \"5de2e392-7605-4b8c-831c-4101c098fc0e\") " pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.691624 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.772250 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-75zv4"]
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.773856 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.792852 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-75zv4"]
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.915054 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-h7hhd"]
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.922617 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.929924 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-6rgnn" podUID="67c8afbc-8ed9-4ebb-b150-f6f5257f7b15" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: i/o timeout"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.938586 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:27 crc kubenswrapper[4737]: I0126 18:56:27.938982 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2bx\" (UniqueName: \"kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.009474 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h7hhd"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.044536 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.044611 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2bx\" (UniqueName: \"kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.044791 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.044858 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tg2\" (UniqueName: \"kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.045910 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.182929 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-g9xvw"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.185826 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.203470 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2bx\" (UniqueName: \"kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx\") pod \"nova-api-db-create-75zv4\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.245897 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.246022 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tg2\" (UniqueName: \"kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.248168 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.257503 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3193-account-create-update-8m9fw"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.260113 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.286785 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.289530 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.327496 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g9xvw"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.376386 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3193-account-create-update-8m9fw"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.336129 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tg2\" (UniqueName: \"kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2\") pod \"nova-cell0-db-create-h7hhd\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.383744 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82w4h\" (UniqueName: \"kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.384115 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.384394 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.384608 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrqp\" (UniqueName: \"kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.495461 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrqp\" (UniqueName: \"kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.495530 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82w4h\" (UniqueName: \"kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.495564 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.495746 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.496417 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.506632 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.569643 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.585971 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrqp\" (UniqueName: \"kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp\") pod \"nova-cell1-db-create-g9xvw\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") " pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.598216 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4c57-account-create-update-xw7qm"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.599985 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.613709 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.614373 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82w4h\" (UniqueName: \"kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h\") pod \"nova-api-3193-account-create-update-8m9fw\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") " pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.640661 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4c57-account-create-update-xw7qm"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.678637 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2bf9-account-create-update-kccxv"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.689257 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.692644 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.712231 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2bf9-account-create-update-kccxv"]
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.713802 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.713856 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwwg\" (UniqueName: \"kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.768852 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="ca50689d-e7af-4267-9ee0-11d254c08962" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.777547 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="ca50689d-e7af-4267-9ee0-11d254c08962" containerName="galera" probeResult="failure" output="command timed out"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.811386 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.816282 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.816327 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwwg\" (UniqueName: \"kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.816473 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dz6\" (UniqueName: \"kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6\") pod \"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.816537 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts\") pod \"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv"
Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.816987 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.871486 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwwg\" (UniqueName: \"kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg\") pod \"nova-cell0-4c57-account-create-update-xw7qm\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") " pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.911956 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3193-account-create-update-8m9fw" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.918810 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts\") pod \"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.919042 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dz6\" (UniqueName: \"kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6\") pod \"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.924404 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts\") pod 
\"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" Jan 26 18:56:28 crc kubenswrapper[4737]: I0126 18:56:28.949401 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="715806cf-cb82-4224-bdb0-8aed20e29b49" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.216:8776/healthcheck\": context deadline exceeded" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.014521 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dz6\" (UniqueName: \"kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6\") pod \"nova-cell1-2bf9-account-create-update-kccxv\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") " pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.047668 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.077323 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.103729 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.104829 4737 scope.go:117] "RemoveContainer" containerID="b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e" Jan 26 18:56:29 crc kubenswrapper[4737]: E0126 18:56:29.105252 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-84dfd788f9-bd7kw_openstack(ee3532ad-ceed-44bc-a5ab-10a0710c1ba5)\"" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.362302 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.644893 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-75zv4"] Jan 26 18:56:29 crc kubenswrapper[4737]: I0126 18:56:29.713572 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-h7hhd"] Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.087947 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5de2e392-7605-4b8c-831c-4101c098fc0e","Type":"ContainerStarted","Data":"5eb57d049bd28269e95d60e79046a2d6006cc9c14ce58394567767491c5bb1e2"} Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.099588 4737 generic.go:334] "Generic (PLEG): container finished" podID="f2105678-a452-433f-aa75-908321272f46" containerID="0be6c934d819d7882080f2d5bcefc3f6ede201b6a0c105d7d0b2ec4ca03547ab" exitCode=0 Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.099706 
4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerDied","Data":"0be6c934d819d7882080f2d5bcefc3f6ede201b6a0c105d7d0b2ec4ca03547ab"} Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.106757 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h7hhd" event={"ID":"e07b7037-d1bb-485f-a2e0-951b51de8c74","Type":"ContainerStarted","Data":"2333d289ac974a6254622f2525a6d7476f23fc49ff9706ddaa9efd10ffe07dc9"} Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.116315 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-75zv4" event={"ID":"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47","Type":"ContainerStarted","Data":"0d73cfb7d0da866e12f29d124b2ea48d5082175907f51efeeb49c49d8dc9e530"} Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.615486 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2bf9-account-create-update-kccxv"] Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.634188 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3193-account-create-update-8m9fw"] Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.695945 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g9xvw"] Jan 26 18:56:30 crc kubenswrapper[4737]: I0126 18:56:30.745917 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4c57-account-create-update-xw7qm"] Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.158892 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" event={"ID":"ea8f2357-50f9-46d8-9527-f04533ce926b","Type":"ContainerStarted","Data":"f42155817eb9ad4297c61a7c4a278863cc2572cbdc12b711fe9ce79bbafcb789"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.161037 4737 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9xvw" event={"ID":"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f","Type":"ContainerStarted","Data":"f2f0d1025d40083f14b47e029ce335577177c8e876c22028fa9866bf9206742d"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.175059 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-75zv4" event={"ID":"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47","Type":"ContainerStarted","Data":"7a5ab34ef8c18a3f42ad36a0fe0dea4a3a1521e1b4027853de0176849388bfc1"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.185101 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3193-account-create-update-8m9fw" event={"ID":"0bcd08ca-7be6-4684-b83d-19a94dee32ad","Type":"ContainerStarted","Data":"ff7f9b71c6b76e9f6a42c6451962d6b960e3af91c30b1d78e10ddd57e3d99745"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.201639 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-75zv4" podStartSLOduration=4.201617894 podStartE2EDuration="4.201617894s" podCreationTimestamp="2026-01-26 18:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:31.198356948 +0000 UTC m=+1564.506551656" watchObservedRunningTime="2026-01-26 18:56:31.201617894 +0000 UTC m=+1564.509812602" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.202735 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h7hhd" event={"ID":"e07b7037-d1bb-485f-a2e0-951b51de8c74","Type":"ContainerStarted","Data":"a5ccc9d03de02387d0b5845fc439b27e44b744fd08f9f6e0335a795f24f6a471"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.213322 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" 
event={"ID":"ec2b468d-e649-4320-8687-bc3b4ed09593","Type":"ContainerStarted","Data":"90ea35e6baa3e0606a5c3acab602f04be4852a2a97f1a7e762074d2449918601"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.238673 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2105678-a452-433f-aa75-908321272f46","Type":"ContainerDied","Data":"7e899d3b113f0d353bc9d3743fae421c517bb357d4e87d780a5d647f8a716d99"} Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.238716 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e899d3b113f0d353bc9d3743fae421c517bb357d4e87d780a5d647f8a716d99" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.249927 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-h7hhd" podStartSLOduration=4.249903799 podStartE2EDuration="4.249903799s" podCreationTimestamp="2026-01-26 18:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:31.224950218 +0000 UTC m=+1564.533144926" watchObservedRunningTime="2026-01-26 18:56:31.249903799 +0000 UTC m=+1564.558098517" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.285113 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.362896 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.362977 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363156 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363262 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363314 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363332 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363400 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwtg\" (UniqueName: \"kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg\") pod \"f2105678-a452-433f-aa75-908321272f46\" (UID: \"f2105678-a452-433f-aa75-908321272f46\") " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.363465 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.364026 4737 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.364991 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs" (OuterVolumeSpecName: "logs") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.440867 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg" (OuterVolumeSpecName: "kube-api-access-ljwtg") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "kube-api-access-ljwtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.461207 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts" (OuterVolumeSpecName: "scripts") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.478043 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2105678-a452-433f-aa75-908321272f46-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.483245 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.483291 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwtg\" (UniqueName: \"kubernetes.io/projected/f2105678-a452-433f-aa75-908321272f46-kube-api-access-ljwtg\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.700176 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780" (OuterVolumeSpecName: "glance") pod 
"f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.763375 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.766736 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.789806 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.789878 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") on node \"crc\" " Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.789900 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.812213 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data" (OuterVolumeSpecName: "config-data") pod "f2105678-a452-433f-aa75-908321272f46" (UID: "f2105678-a452-433f-aa75-908321272f46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.832324 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.832519 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780") on node "crc" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.893039 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2105678-a452-433f-aa75-908321272f46-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.893108 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:31 crc kubenswrapper[4737]: I0126 18:56:31.992452 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="715806cf-cb82-4224-bdb0-8aed20e29b49" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.216:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.114371 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.236997 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.268423 4737 generic.go:334] "Generic (PLEG): container finished" podID="e07b7037-d1bb-485f-a2e0-951b51de8c74" containerID="a5ccc9d03de02387d0b5845fc439b27e44b744fd08f9f6e0335a795f24f6a471" exitCode=0 Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.268530 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-h7hhd" event={"ID":"e07b7037-d1bb-485f-a2e0-951b51de8c74","Type":"ContainerDied","Data":"a5ccc9d03de02387d0b5845fc439b27e44b744fd08f9f6e0335a795f24f6a471"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.291386 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" event={"ID":"ea8f2357-50f9-46d8-9527-f04533ce926b","Type":"ContainerStarted","Data":"0fa505b2da759bce2da43968177c07e66fba26bacdb584fa732d441bb7bca5c5"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.311185 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9xvw" event={"ID":"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f","Type":"ContainerStarted","Data":"fa5bc3e8945224807808ed3cc617cc7e4a6f9b8c9f533c7884cc9455bbaeda2c"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.326319 4737 generic.go:334] "Generic (PLEG): container finished" podID="8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" containerID="7a5ab34ef8c18a3f42ad36a0fe0dea4a3a1521e1b4027853de0176849388bfc1" exitCode=0 Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.326425 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-75zv4" event={"ID":"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47","Type":"ContainerDied","Data":"7a5ab34ef8c18a3f42ad36a0fe0dea4a3a1521e1b4027853de0176849388bfc1"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.340701 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" event={"ID":"ec2b468d-e649-4320-8687-bc3b4ed09593","Type":"ContainerStarted","Data":"bd8c76d9bc90f419a1408d1209e85ee77bc4dfcfc2d4fe09745ce39583a6c986"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.354384 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3193-account-create-update-8m9fw" 
event={"ID":"0bcd08ca-7be6-4684-b83d-19a94dee32ad","Type":"ContainerStarted","Data":"8676037a138c03205571ce081641fb8e12b7eb1050fb674d7b55b047ce4b6d95"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.368988 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" podStartSLOduration=4.368962131 podStartE2EDuration="4.368962131s" podCreationTimestamp="2026-01-26 18:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:32.344684805 +0000 UTC m=+1565.652879513" watchObservedRunningTime="2026-01-26 18:56:32.368962131 +0000 UTC m=+1565.677156839" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.373361 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.374325 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5de2e392-7605-4b8c-831c-4101c098fc0e","Type":"ContainerStarted","Data":"166be1126e12a587ce5d2d0d3cf94257648f94dc8cbe416e59ce864e19b6e897"} Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.495262 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-g9xvw" podStartSLOduration=4.495237627 podStartE2EDuration="4.495237627s" podCreationTimestamp="2026-01-26 18:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:32.425849228 +0000 UTC m=+1565.734043946" watchObservedRunningTime="2026-01-26 18:56:32.495237627 +0000 UTC m=+1565.803432335" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.627867 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" podStartSLOduration=4.627842399 podStartE2EDuration="4.627842399s" podCreationTimestamp="2026-01-26 18:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:32.503605862 +0000 UTC m=+1565.811800570" watchObservedRunningTime="2026-01-26 18:56:32.627842399 +0000 UTC m=+1565.936037107" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.649687 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3193-account-create-update-8m9fw" podStartSLOduration=4.649657968 podStartE2EDuration="4.649657968s" podCreationTimestamp="2026-01-26 18:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:32.552182894 +0000 UTC m=+1565.860377602" watchObservedRunningTime="2026-01-26 18:56:32.649657968 +0000 UTC m=+1565.957852676" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.673666 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.694184 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.710905 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:32 crc kubenswrapper[4737]: E0126 18:56:32.711495 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-log" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.711513 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-log" Jan 26 18:56:32 crc kubenswrapper[4737]: E0126 18:56:32.711558 4737 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-httpd" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.711566 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-httpd" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.711786 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-httpd" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.711815 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2105678-a452-433f-aa75-908321272f46" containerName="glance-log" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.724060 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.728114 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.729990 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.730349 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.749621 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-b844f4d95-b87n7" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": read tcp 10.217.0.2:52604->10.217.0.219:8000: read: connection reset by peer" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.751458 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-b844f4d95-b87n7" 
podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.840352 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff767c7d5-88fm9" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": read tcp 10.217.0.2:38952->10.217.0.220:8004: read: connection reset by peer" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.841077 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff767c7d5-88fm9" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": dial tcp 10.217.0.220:8004: connect: connection refused" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928468 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928551 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928706 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928743 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928776 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89kv\" (UniqueName: \"kubernetes.io/projected/9c0fd189-4592-4f52-a100-e6fc3581ef26-kube-api-access-x89kv\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.928967 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.929278 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:32 crc kubenswrapper[4737]: I0126 18:56:32.929416 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.008869 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2105678-a452-433f-aa75-908321272f46" path="/var/lib/kubelet/pods/f2105678-a452-433f-aa75-908321272f46/volumes" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.036637 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.036884 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037107 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037135 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037211 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89kv\" (UniqueName: \"kubernetes.io/projected/9c0fd189-4592-4f52-a100-e6fc3581ef26-kube-api-access-x89kv\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037307 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037581 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.037746 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.044215 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.044295 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.044581 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.044820 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c0fd189-4592-4f52-a100-e6fc3581ef26-logs\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.049848 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.051804 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.051841 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fdf5547aa86845271a11e2b0db53f95e86a38bbd5e41234fa2d6106d36b4b80f/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.059297 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c0fd189-4592-4f52-a100-e6fc3581ef26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.079908 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89kv\" (UniqueName: \"kubernetes.io/projected/9c0fd189-4592-4f52-a100-e6fc3581ef26-kube-api-access-x89kv\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.159799 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ffb43bf-5e4b-4a05-81cd-85836e6d2780\") pod \"glance-default-internal-api-0\" (UID: \"9c0fd189-4592-4f52-a100-e6fc3581ef26\") " pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.202590 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.356909 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.468393 4737 generic.go:334] "Generic (PLEG): container finished" podID="ec2b468d-e649-4320-8687-bc3b4ed09593" containerID="bd8c76d9bc90f419a1408d1209e85ee77bc4dfcfc2d4fe09745ce39583a6c986" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.468975 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" event={"ID":"ec2b468d-e649-4320-8687-bc3b4ed09593","Type":"ContainerDied","Data":"bd8c76d9bc90f419a1408d1209e85ee77bc4dfcfc2d4fe09745ce39583a6c986"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.474680 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom\") pod \"11ac79cf-f745-4084-ba59-ee3ff364518d\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.474992 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsm2j\" (UniqueName: \"kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j\") pod \"11ac79cf-f745-4084-ba59-ee3ff364518d\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.475170 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle\") pod \"11ac79cf-f745-4084-ba59-ee3ff364518d\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.475223 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data\") pod \"11ac79cf-f745-4084-ba59-ee3ff364518d\" (UID: \"11ac79cf-f745-4084-ba59-ee3ff364518d\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.484610 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j" (OuterVolumeSpecName: "kube-api-access-hsm2j") pod "11ac79cf-f745-4084-ba59-ee3ff364518d" (UID: "11ac79cf-f745-4084-ba59-ee3ff364518d"). InnerVolumeSpecName "kube-api-access-hsm2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.490402 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11ac79cf-f745-4084-ba59-ee3ff364518d" (UID: "11ac79cf-f745-4084-ba59-ee3ff364518d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.509667 4737 generic.go:334] "Generic (PLEG): container finished" podID="0bcd08ca-7be6-4684-b83d-19a94dee32ad" containerID="8676037a138c03205571ce081641fb8e12b7eb1050fb674d7b55b047ce4b6d95" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.509744 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3193-account-create-update-8m9fw" event={"ID":"0bcd08ca-7be6-4684-b83d-19a94dee32ad","Type":"ContainerDied","Data":"8676037a138c03205571ce081641fb8e12b7eb1050fb674d7b55b047ce4b6d95"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.564364 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11ac79cf-f745-4084-ba59-ee3ff364518d" (UID: "11ac79cf-f745-4084-ba59-ee3ff364518d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.579394 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.579434 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsm2j\" (UniqueName: \"kubernetes.io/projected/11ac79cf-f745-4084-ba59-ee3ff364518d-kube-api-access-hsm2j\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.579448 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.580918 4737 generic.go:334] "Generic (PLEG): container finished" podID="2d51eb8e-1bae-4432-9997-f74055d01000" containerID="5bd0fe5450e0b014f9b1890d301306732e938dde828c492337ae1f682ddb1db5" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.581015 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff767c7d5-88fm9" event={"ID":"2d51eb8e-1bae-4432-9997-f74055d01000","Type":"ContainerDied","Data":"5bd0fe5450e0b014f9b1890d301306732e938dde828c492337ae1f682ddb1db5"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.584353 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data" (OuterVolumeSpecName: "config-data") pod "11ac79cf-f745-4084-ba59-ee3ff364518d" (UID: "11ac79cf-f745-4084-ba59-ee3ff364518d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.590022 4737 generic.go:334] "Generic (PLEG): container finished" podID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerID="7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.590176 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b844f4d95-b87n7" event={"ID":"11ac79cf-f745-4084-ba59-ee3ff364518d","Type":"ContainerDied","Data":"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.590213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-b844f4d95-b87n7" event={"ID":"11ac79cf-f745-4084-ba59-ee3ff364518d","Type":"ContainerDied","Data":"a08a5f2c4fe4d158e3b90169ba757f2ca2b8ebbfd282fa91945fbdd4681052f0"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.590230 4737 scope.go:117] "RemoveContainer" containerID="7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.590368 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-b844f4d95-b87n7" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.593775 4737 generic.go:334] "Generic (PLEG): container finished" podID="ea8f2357-50f9-46d8-9527-f04533ce926b" containerID="0fa505b2da759bce2da43968177c07e66fba26bacdb584fa732d441bb7bca5c5" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.593932 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" event={"ID":"ea8f2357-50f9-46d8-9527-f04533ce926b","Type":"ContainerDied","Data":"0fa505b2da759bce2da43968177c07e66fba26bacdb584fa732d441bb7bca5c5"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.600358 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.600337452 podStartE2EDuration="7.600337452s" podCreationTimestamp="2026-01-26 18:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:33.57111459 +0000 UTC m=+1566.879309298" watchObservedRunningTime="2026-01-26 18:56:33.600337452 +0000 UTC m=+1566.908532160" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.605981 4737 generic.go:334] "Generic (PLEG): container finished" podID="8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" containerID="fa5bc3e8945224807808ed3cc617cc7e4a6f9b8c9f533c7884cc9455bbaeda2c" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.606277 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9xvw" event={"ID":"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f","Type":"ContainerDied","Data":"fa5bc3e8945224807808ed3cc617cc7e4a6f9b8c9f533c7884cc9455bbaeda2c"} Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.682182 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/11ac79cf-f745-4084-ba59-ee3ff364518d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.699464 4737 scope.go:117] "RemoveContainer" containerID="7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4" Jan 26 18:56:33 crc kubenswrapper[4737]: E0126 18:56:33.702351 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4\": container with ID starting with 7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4 not found: ID does not exist" containerID="7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.702402 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4"} err="failed to get container status \"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4\": rpc error: code = NotFound desc = could not find container \"7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4\": container with ID starting with 7bac75e8a7f901dc9bb78ed9798abe8e671478f31767c436b2505a70dce880a4 not found: ID does not exist" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.702820 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6ff767c7d5-88fm9" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.752033 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.761174 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-b844f4d95-b87n7"] Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.783777 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom\") pod \"2d51eb8e-1bae-4432-9997-f74055d01000\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.783838 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle\") pod \"2d51eb8e-1bae-4432-9997-f74055d01000\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.783886 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data\") pod \"2d51eb8e-1bae-4432-9997-f74055d01000\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.784582 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbf5m\" (UniqueName: \"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m\") pod \"2d51eb8e-1bae-4432-9997-f74055d01000\" (UID: \"2d51eb8e-1bae-4432-9997-f74055d01000\") " Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.789682 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m" (OuterVolumeSpecName: "kube-api-access-wbf5m") pod "2d51eb8e-1bae-4432-9997-f74055d01000" (UID: "2d51eb8e-1bae-4432-9997-f74055d01000"). InnerVolumeSpecName "kube-api-access-wbf5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.793239 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2d51eb8e-1bae-4432-9997-f74055d01000" (UID: "2d51eb8e-1bae-4432-9997-f74055d01000"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.838841 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d51eb8e-1bae-4432-9997-f74055d01000" (UID: "2d51eb8e-1bae-4432-9997-f74055d01000"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.888085 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbf5m\" (UniqueName: \"kubernetes.io/projected/2d51eb8e-1bae-4432-9997-f74055d01000-kube-api-access-wbf5m\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.888118 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.888127 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.907240 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data" (OuterVolumeSpecName: "config-data") pod "2d51eb8e-1bae-4432-9997-f74055d01000" (UID: "2d51eb8e-1bae-4432-9997-f74055d01000"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.953888 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="715806cf-cb82-4224-bdb0-8aed20e29b49" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.216:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:56:33 crc kubenswrapper[4737]: I0126 18:56:33.990984 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d51eb8e-1bae-4432-9997-f74055d01000-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:34 crc kubenswrapper[4737]: W0126 18:56:34.266444 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c0fd189_4592_4f52_a100_e6fc3581ef26.slice/crio-8bdd4e7409a38f3e4ed39b7db829d8eae173e96b03b295b07fa2ae86bcced642 WatchSource:0}: Error finding container 8bdd4e7409a38f3e4ed39b7db829d8eae173e96b03b295b07fa2ae86bcced642: Status 404 returned error can't find the container with id 8bdd4e7409a38f3e4ed39b7db829d8eae173e96b03b295b07fa2ae86bcced642 Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.282736 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.471120 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-75zv4" Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.484199 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-h7hhd" Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.558240 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tg2\" (UniqueName: \"kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2\") pod \"e07b7037-d1bb-485f-a2e0-951b51de8c74\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.558494 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts\") pod \"e07b7037-d1bb-485f-a2e0-951b51de8c74\" (UID: \"e07b7037-d1bb-485f-a2e0-951b51de8c74\") " Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.558608 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2bx\" (UniqueName: \"kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx\") pod \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.558766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts\") pod \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\" (UID: \"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47\") " Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.564979 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" (UID: "8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.564983 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e07b7037-d1bb-485f-a2e0-951b51de8c74" (UID: "e07b7037-d1bb-485f-a2e0-951b51de8c74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.601285 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2" (OuterVolumeSpecName: "kube-api-access-45tg2") pod "e07b7037-d1bb-485f-a2e0-951b51de8c74" (UID: "e07b7037-d1bb-485f-a2e0-951b51de8c74"). InnerVolumeSpecName "kube-api-access-45tg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.611924 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx" (OuterVolumeSpecName: "kube-api-access-xb2bx") pod "8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" (UID: "8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47"). InnerVolumeSpecName "kube-api-access-xb2bx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.675228 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07b7037-d1bb-485f-a2e0-951b51de8c74-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.675276 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2bx\" (UniqueName: \"kubernetes.io/projected/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-kube-api-access-xb2bx\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.675291 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.675302 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tg2\" (UniqueName: \"kubernetes.io/projected/e07b7037-d1bb-485f-a2e0-951b51de8c74-kube-api-access-45tg2\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.736740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff767c7d5-88fm9" event={"ID":"2d51eb8e-1bae-4432-9997-f74055d01000","Type":"ContainerDied","Data":"ddbfed64353f9c02d8d27f23bf1bfc87ce441bbf2e7e1bbe764ad1eb63e37731"}
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.737060 4737 scope.go:117] "RemoveContainer" containerID="5bd0fe5450e0b014f9b1890d301306732e938dde828c492337ae1f682ddb1db5"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.737433 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6ff767c7d5-88fm9"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.742569 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-75zv4" event={"ID":"8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47","Type":"ContainerDied","Data":"0d73cfb7d0da866e12f29d124b2ea48d5082175907f51efeeb49c49d8dc9e530"}
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.742622 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d73cfb7d0da866e12f29d124b2ea48d5082175907f51efeeb49c49d8dc9e530"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.742729 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-75zv4"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.744838 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c0fd189-4592-4f52-a100-e6fc3581ef26","Type":"ContainerStarted","Data":"8bdd4e7409a38f3e4ed39b7db829d8eae173e96b03b295b07fa2ae86bcced642"}
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.751049 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5de2e392-7605-4b8c-831c-4101c098fc0e","Type":"ContainerStarted","Data":"e41915caccdad59db07936414e874885d94aec2c270efa3c7f3acba843290cfe"}
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.761005 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-h7hhd"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.762698 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-h7hhd" event={"ID":"e07b7037-d1bb-485f-a2e0-951b51de8c74","Type":"ContainerDied","Data":"2333d289ac974a6254622f2525a6d7476f23fc49ff9706ddaa9efd10ffe07dc9"}
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.762755 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2333d289ac974a6254622f2525a6d7476f23fc49ff9706ddaa9efd10ffe07dc9"
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.832662 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"]
Jan 26 18:56:34 crc kubenswrapper[4737]: I0126 18:56:34.857027 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6ff767c7d5-88fm9"]
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.080593 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" path="/var/lib/kubelet/pods/11ac79cf-f745-4084-ba59-ee3ff364518d/volumes"
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.083679 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" path="/var/lib/kubelet/pods/2d51eb8e-1bae-4432-9997-f74055d01000/volumes"
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.572547 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.698121 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.698510 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-central-agent" containerID="cri-o://4e4f05e8f9757a70023f46eedbdb049345ee5ee7fba3a371ead8f1eb237611f7" gracePeriod=30
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.698976 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="proxy-httpd" containerID="cri-o://1907b2858a93268f4ad912ad74c45fb6c7fffd8e1b845e014e57a45e3f151aca" gracePeriod=30
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.699030 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="sg-core" containerID="cri-o://402874a2c5bf9f3c1ad3566ef6dcaf25f325e681e995bd4b0ecfa3b8a1b7b693" gracePeriod=30
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.699064 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-notification-agent" containerID="cri-o://6aea5f7e1f8b0af0106347d80da70bce245e198db78acbec6f4607ef3246ecb9" gracePeriod=30
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.762652 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zrqp\" (UniqueName: \"kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp\") pod \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") "
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.762920 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts\") pod \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\" (UID: \"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f\") "
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.794443 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp" (OuterVolumeSpecName: "kube-api-access-9zrqp") pod "8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" (UID: "8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f"). InnerVolumeSpecName "kube-api-access-9zrqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.805571 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" (UID: "8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.813190 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c0fd189-4592-4f52-a100-e6fc3581ef26","Type":"ContainerStarted","Data":"299f0d21b21e5d6d872b983aa64a2732ac857877a9ba82cd6fbdb6450fc4a053"}
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.833134 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9xvw" event={"ID":"8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f","Type":"ContainerDied","Data":"f2f0d1025d40083f14b47e029ce335577177c8e876c22028fa9866bf9206742d"}
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.833174 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2f0d1025d40083f14b47e029ce335577177c8e876c22028fa9866bf9206742d"
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.833235 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9xvw"
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.867712 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:35 crc kubenswrapper[4737]: I0126 18:56:35.868143 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zrqp\" (UniqueName: \"kubernetes.io/projected/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f-kube-api-access-9zrqp\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.161265 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.221:3000/\": read tcp 10.217.0.2:40332->10.217.0.221:3000: read: connection reset by peer"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.328704 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.339314 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.342837 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.487964 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mwwg\" (UniqueName: \"kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg\") pod \"ec2b468d-e649-4320-8687-bc3b4ed09593\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.488008 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82w4h\" (UniqueName: \"kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h\") pod \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.488148 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67dz6\" (UniqueName: \"kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6\") pod \"ea8f2357-50f9-46d8-9527-f04533ce926b\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.488229 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts\") pod \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\" (UID: \"0bcd08ca-7be6-4684-b83d-19a94dee32ad\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.488255 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts\") pod \"ea8f2357-50f9-46d8-9527-f04533ce926b\" (UID: \"ea8f2357-50f9-46d8-9527-f04533ce926b\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.488336 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts\") pod \"ec2b468d-e649-4320-8687-bc3b4ed09593\" (UID: \"ec2b468d-e649-4320-8687-bc3b4ed09593\") "
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.489204 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bcd08ca-7be6-4684-b83d-19a94dee32ad" (UID: "0bcd08ca-7be6-4684-b83d-19a94dee32ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.489259 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec2b468d-e649-4320-8687-bc3b4ed09593" (UID: "ec2b468d-e649-4320-8687-bc3b4ed09593"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.489747 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea8f2357-50f9-46d8-9527-f04533ce926b" (UID: "ea8f2357-50f9-46d8-9527-f04533ce926b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.497299 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6" (OuterVolumeSpecName: "kube-api-access-67dz6") pod "ea8f2357-50f9-46d8-9527-f04533ce926b" (UID: "ea8f2357-50f9-46d8-9527-f04533ce926b"). InnerVolumeSpecName "kube-api-access-67dz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.510035 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h" (OuterVolumeSpecName: "kube-api-access-82w4h") pod "0bcd08ca-7be6-4684-b83d-19a94dee32ad" (UID: "0bcd08ca-7be6-4684-b83d-19a94dee32ad"). InnerVolumeSpecName "kube-api-access-82w4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.516223 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg" (OuterVolumeSpecName: "kube-api-access-5mwwg") pod "ec2b468d-e649-4320-8687-bc3b4ed09593" (UID: "ec2b468d-e649-4320-8687-bc3b4ed09593"). InnerVolumeSpecName "kube-api-access-5mwwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591869 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcd08ca-7be6-4684-b83d-19a94dee32ad-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591921 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f2357-50f9-46d8-9527-f04533ce926b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591935 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec2b468d-e649-4320-8687-bc3b4ed09593-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591950 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mwwg\" (UniqueName: \"kubernetes.io/projected/ec2b468d-e649-4320-8687-bc3b4ed09593-kube-api-access-5mwwg\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591965 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82w4h\" (UniqueName: \"kubernetes.io/projected/0bcd08ca-7be6-4684-b83d-19a94dee32ad-kube-api-access-82w4h\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.591982 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67dz6\" (UniqueName: \"kubernetes.io/projected/ea8f2357-50f9-46d8-9527-f04533ce926b-kube-api-access-67dz6\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.846052 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv" event={"ID":"ea8f2357-50f9-46d8-9527-f04533ce926b","Type":"ContainerDied","Data":"f42155817eb9ad4297c61a7c4a278863cc2572cbdc12b711fe9ce79bbafcb789"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.846414 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42155817eb9ad4297c61a7c4a278863cc2572cbdc12b711fe9ce79bbafcb789"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.846479 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2bf9-account-create-update-kccxv"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859774 4737 generic.go:334] "Generic (PLEG): container finished" podID="7ee37723-a972-4371-9193-bf20e0126bca" containerID="1907b2858a93268f4ad912ad74c45fb6c7fffd8e1b845e014e57a45e3f151aca" exitCode=0
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859813 4737 generic.go:334] "Generic (PLEG): container finished" podID="7ee37723-a972-4371-9193-bf20e0126bca" containerID="402874a2c5bf9f3c1ad3566ef6dcaf25f325e681e995bd4b0ecfa3b8a1b7b693" exitCode=2
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859822 4737 generic.go:334] "Generic (PLEG): container finished" podID="7ee37723-a972-4371-9193-bf20e0126bca" containerID="6aea5f7e1f8b0af0106347d80da70bce245e198db78acbec6f4607ef3246ecb9" exitCode=0
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859877 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerDied","Data":"1907b2858a93268f4ad912ad74c45fb6c7fffd8e1b845e014e57a45e3f151aca"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859908 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerDied","Data":"402874a2c5bf9f3c1ad3566ef6dcaf25f325e681e995bd4b0ecfa3b8a1b7b693"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.859922 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerDied","Data":"6aea5f7e1f8b0af0106347d80da70bce245e198db78acbec6f4607ef3246ecb9"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.861554 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm" event={"ID":"ec2b468d-e649-4320-8687-bc3b4ed09593","Type":"ContainerDied","Data":"90ea35e6baa3e0606a5c3acab602f04be4852a2a97f1a7e762074d2449918601"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.861583 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ea35e6baa3e0606a5c3acab602f04be4852a2a97f1a7e762074d2449918601"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.861648 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4c57-account-create-update-xw7qm"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.886325 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3193-account-create-update-8m9fw" event={"ID":"0bcd08ca-7be6-4684-b83d-19a94dee32ad","Type":"ContainerDied","Data":"ff7f9b71c6b76e9f6a42c6451962d6b960e3af91c30b1d78e10ddd57e3d99745"}
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.886378 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7f9b71c6b76e9f6a42c6451962d6b960e3af91c30b1d78e10ddd57e3d99745"
Jan 26 18:56:36 crc kubenswrapper[4737]: I0126 18:56:36.886400 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3193-account-create-update-8m9fw"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.477855 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-755b5655f9-7jhg9"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.542919 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"]
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.693604 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.693665 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.744454 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.753742 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.907740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9c0fd189-4592-4f52-a100-e6fc3581ef26","Type":"ContainerStarted","Data":"b5a7dd65ad5f144d4d2f9b2b3907c509a0411f9b237b7f5dc6557fe78f969fc0"}
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.907787 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.907801 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.909958 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5494f5754b-8k4bc"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.969617 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.969588542 podStartE2EDuration="5.969588542s" podCreationTimestamp="2026-01-26 18:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:37.940283308 +0000 UTC m=+1571.248478006" watchObservedRunningTime="2026-01-26 18:56:37.969588542 +0000 UTC m=+1571.277783250"
Jan 26 18:56:37 crc kubenswrapper[4737]: I0126 18:56:37.982223 4737 scope.go:117] "RemoveContainer" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.058386 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86744b887-d62q9"]
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.412610 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.629620 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom\") pod \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") "
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.630107 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data\") pod \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") "
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.630300 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rvp\" (UniqueName: \"kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp\") pod \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") "
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.630328 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle\") pod \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\" (UID: \"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5\") "
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.650252 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" (UID: "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.650488 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp" (OuterVolumeSpecName: "kube-api-access-h4rvp") pod "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" (UID: "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5"). InnerVolumeSpecName "kube-api-access-h4rvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.674613 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" (UID: "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.708293 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data" (OuterVolumeSpecName: "config-data") pod "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" (UID: "ee3532ad-ceed-44bc-a5ab-10a0710c1ba5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.732996 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.733032 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.733042 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rvp\" (UniqueName: \"kubernetes.io/projected/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-kube-api-access-h4rvp\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.733052 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914316 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s2dcv"]
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914800 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914821 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914838 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8f2357-50f9-46d8-9527-f04533ce926b" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914844 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8f2357-50f9-46d8-9527-f04533ce926b" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914854 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914862 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914871 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914877 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914888 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914893 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914906 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914912 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914924 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07b7037-d1bb-485f-a2e0-951b51de8c74" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914930 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07b7037-d1bb-485f-a2e0-951b51de8c74" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914943 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2b468d-e649-4320-8687-bc3b4ed09593" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914949 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2b468d-e649-4320-8687-bc3b4ed09593" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914961 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcd08ca-7be6-4684-b83d-19a94dee32ad" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914966 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcd08ca-7be6-4684-b83d-19a94dee32ad" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: E0126 18:56:38.914989 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.914994 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915198 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915208 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d51eb8e-1bae-4432-9997-f74055d01000" containerName="heat-api"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915219 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07b7037-d1bb-485f-a2e0-951b51de8c74" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915231 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" containerName="mariadb-database-create"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915250 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915257 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915266 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8f2357-50f9-46d8-9527-f04533ce926b" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915282 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcd08ca-7be6-4684-b83d-19a94dee32ad" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915292 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2b468d-e649-4320-8687-bc3b4ed09593" containerName="mariadb-account-create-update"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.915299 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac79cf-f745-4084-ba59-ee3ff364518d" containerName="heat-cfnapi"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.916038 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s2dcv"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.918450 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.919080 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.920107 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw" event={"ID":"ee3532ad-ceed-44bc-a5ab-10a0710c1ba5","Type":"ContainerDied","Data":"08fb1ca923f5f8b02dbd0ad1dce69ca211e5ef87ed8839517b5077c026ff4f06"}
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.920137 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84dfd788f9-bd7kw"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.920168 4737 scope.go:117] "RemoveContainer" containerID="b8f34e50940767633a7c49d48c3db43cabd157526042c9a8edda982ebe08bd4e"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.927308 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerStarted","Data":"0ffa945baecee5354cd9a89ea2dd736848a345911cb7809bdbfe2ad65de51b2d"}
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.928574 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-86744b887-d62q9" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" containerID="cri-o://0ffa945baecee5354cd9a89ea2dd736848a345911cb7809bdbfe2ad65de51b2d" gracePeriod=60
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.931269 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wcfhr"
Jan 26 18:56:38 crc kubenswrapper[4737]: I0126 18:56:38.957314 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s2dcv"]
Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.054207 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv"
Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.054309 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954c5\" (UniqueName: \"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv"
Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.054560 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv"
Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.054756 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv"
Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.056553 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"]
Jan 26 18:56:39 crc
kubenswrapper[4737]: I0126 18:56:39.070985 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.092682 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.101714 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84dfd788f9-bd7kw"] Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.158870 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.158930 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954c5\" (UniqueName: \"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.159017 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.159126 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data\") pod \"nova-cell0-conductor-db-sync-s2dcv\" 
(UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.160250 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.160567 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5f8dbb8f99-b67tw" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" containerID="cri-o://f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" gracePeriod=60 Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.166876 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.167834 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.172889 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.198088 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954c5\" (UniqueName: 
\"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5\") pod \"nova-cell0-conductor-db-sync-s2dcv\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.236675 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.842062 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s2dcv"] Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.972538 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" event={"ID":"9a24b527-6d52-4550-9e95-543e53e4a0fc","Type":"ContainerStarted","Data":"5ea9bd8a541c6ea5e518807d040e47614069a52365e48cc47e9030bebd4e3ce8"} Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.974853 4737 generic.go:334] "Generic (PLEG): container finished" podID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerID="0ffa945baecee5354cd9a89ea2dd736848a345911cb7809bdbfe2ad65de51b2d" exitCode=1 Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.974899 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerDied","Data":"0ffa945baecee5354cd9a89ea2dd736848a345911cb7809bdbfe2ad65de51b2d"} Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.974929 4737 scope.go:117] "RemoveContainer" containerID="75c5945a3c88ea126b1616e23b8ae4fffb6b560f71ccc7bfcf9e3f21d45267ec" Jan 26 18:56:39 crc kubenswrapper[4737]: I0126 18:56:39.988244 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.104279 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.194575 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6pbj\" (UniqueName: \"kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj\") pod \"3aa18399-89e5-455e-a44d-3f862b8c0237\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.194642 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle\") pod \"3aa18399-89e5-455e-a44d-3f862b8c0237\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.194674 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data\") pod \"3aa18399-89e5-455e-a44d-3f862b8c0237\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.194814 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom\") pod \"3aa18399-89e5-455e-a44d-3f862b8c0237\" (UID: \"3aa18399-89e5-455e-a44d-3f862b8c0237\") " Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.210290 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3aa18399-89e5-455e-a44d-3f862b8c0237" (UID: "3aa18399-89e5-455e-a44d-3f862b8c0237"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.210342 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj" (OuterVolumeSpecName: "kube-api-access-h6pbj") pod "3aa18399-89e5-455e-a44d-3f862b8c0237" (UID: "3aa18399-89e5-455e-a44d-3f862b8c0237"). InnerVolumeSpecName "kube-api-access-h6pbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.243152 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aa18399-89e5-455e-a44d-3f862b8c0237" (UID: "3aa18399-89e5-455e-a44d-3f862b8c0237"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.297108 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.297139 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.297149 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6pbj\" (UniqueName: \"kubernetes.io/projected/3aa18399-89e5-455e-a44d-3f862b8c0237-kube-api-access-h6pbj\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.306869 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data" (OuterVolumeSpecName: "config-data") pod "3aa18399-89e5-455e-a44d-3f862b8c0237" (UID: "3aa18399-89e5-455e-a44d-3f862b8c0237"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:40 crc kubenswrapper[4737]: I0126 18:56:40.398924 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa18399-89e5-455e-a44d-3f862b8c0237-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.008613 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3532ad-ceed-44bc-a5ab-10a0710c1ba5" path="/var/lib/kubelet/pods/ee3532ad-ceed-44bc-a5ab-10a0710c1ba5/volumes" Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.039992 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86744b887-d62q9" event={"ID":"3aa18399-89e5-455e-a44d-3f862b8c0237","Type":"ContainerDied","Data":"15aee058145ecf98f9f86843eb125e28ef401ab3e5021b4cdf0be2ee1b9b068d"} Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.040164 4737 scope.go:117] "RemoveContainer" containerID="0ffa945baecee5354cd9a89ea2dd736848a345911cb7809bdbfe2ad65de51b2d" Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.040318 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-86744b887-d62q9" Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.187141 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86744b887-d62q9"] Jan 26 18:56:41 crc kubenswrapper[4737]: I0126 18:56:41.233962 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-86744b887-d62q9"] Jan 26 18:56:42 crc kubenswrapper[4737]: E0126 18:56:42.064029 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:42 crc kubenswrapper[4737]: E0126 18:56:42.091928 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:42 crc kubenswrapper[4737]: E0126 18:56:42.098872 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:42 crc kubenswrapper[4737]: E0126 18:56:42.098927 4737 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5f8dbb8f99-b67tw" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" Jan 26 18:56:42 crc kubenswrapper[4737]: I0126 18:56:42.702846 4737 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:56:42 crc kubenswrapper[4737]: I0126 18:56:42.703916 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:56:42 crc kubenswrapper[4737]: I0126 18:56:42.747799 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.009976 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" path="/var/lib/kubelet/pods/3aa18399-89e5-455e-a44d-3f862b8c0237/volumes" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.204087 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.204185 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.276966 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.277447 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:43 crc kubenswrapper[4737]: I0126 18:56:43.835690 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.221:3000/\": dial tcp 10.217.0.221:3000: connect: connection refused" Jan 26 18:56:44 crc kubenswrapper[4737]: I0126 18:56:44.129803 4737 generic.go:334] "Generic (PLEG): container finished" podID="7ee37723-a972-4371-9193-bf20e0126bca" 
containerID="4e4f05e8f9757a70023f46eedbdb049345ee5ee7fba3a371ead8f1eb237611f7" exitCode=0 Jan 26 18:56:44 crc kubenswrapper[4737]: I0126 18:56:44.133213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerDied","Data":"4e4f05e8f9757a70023f46eedbdb049345ee5ee7fba3a371ead8f1eb237611f7"} Jan 26 18:56:44 crc kubenswrapper[4737]: I0126 18:56:44.133283 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:44 crc kubenswrapper[4737]: I0126 18:56:44.133541 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:44 crc kubenswrapper[4737]: I0126 18:56:44.914278 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.057706 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.057766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.057860 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvx5w\" (UniqueName: \"kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc 
kubenswrapper[4737]: I0126 18:56:45.057900 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.057945 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.058111 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.058282 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data\") pod \"7ee37723-a972-4371-9193-bf20e0126bca\" (UID: \"7ee37723-a972-4371-9193-bf20e0126bca\") " Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.071505 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.071751 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.100853 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w" (OuterVolumeSpecName: "kube-api-access-xvx5w") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "kube-api-access-xvx5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.152013 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts" (OuterVolumeSpecName: "scripts") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.171377 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvx5w\" (UniqueName: \"kubernetes.io/projected/7ee37723-a972-4371-9193-bf20e0126bca-kube-api-access-xvx5w\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.171416 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.171428 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.171438 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ee37723-a972-4371-9193-bf20e0126bca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.182388 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.183556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ee37723-a972-4371-9193-bf20e0126bca","Type":"ContainerDied","Data":"ad724d4c1de4c38566ca5720b3ecab0543b2c0133f28cd6a50096fcc94c8843b"} Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.183655 4737 scope.go:117] "RemoveContainer" containerID="1907b2858a93268f4ad912ad74c45fb6c7fffd8e1b845e014e57a45e3f151aca" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.270313 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.274141 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.396466 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.396939 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data" (OuterVolumeSpecName: "config-data") pod "7ee37723-a972-4371-9193-bf20e0126bca" (UID: "7ee37723-a972-4371-9193-bf20e0126bca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.479223 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.479671 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee37723-a972-4371-9193-bf20e0126bca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.538675 4737 scope.go:117] "RemoveContainer" containerID="402874a2c5bf9f3c1ad3566ef6dcaf25f325e681e995bd4b0ecfa3b8a1b7b693" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.552726 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.590982 4737 scope.go:117] "RemoveContainer" containerID="6aea5f7e1f8b0af0106347d80da70bce245e198db78acbec6f4607ef3246ecb9" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.600522 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636178 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.636828 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee37723-a972-4371-9193-bf20e0126bca" 
containerName="proxy-httpd" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636854 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="proxy-httpd" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.636882 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636891 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.636905 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636913 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.636944 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="sg-core" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636952 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="sg-core" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.636973 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-central-agent" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.636980 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-central-agent" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.637003 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-notification-agent" 
Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637011 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-notification-agent" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637296 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="sg-core" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637328 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-central-agent" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637338 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637353 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="ceilometer-notification-agent" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637366 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee37723-a972-4371-9193-bf20e0126bca" containerName="proxy-httpd" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637391 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637407 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: E0126 18:56:45.637699 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.637714 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa18399-89e5-455e-a44d-3f862b8c0237" containerName="heat-api" 
Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.640718 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.642738 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.643035 4737 scope.go:117] "RemoveContainer" containerID="4e4f05e8f9757a70023f46eedbdb049345ee5ee7fba3a371ead8f1eb237611f7" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.643031 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.673310 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.691844 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.691918 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5r4j\" (UniqueName: \"kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.691955 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc 
kubenswrapper[4737]: I0126 18:56:45.691990 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.692010 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.692403 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.692499 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794400 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5r4j\" (UniqueName: \"kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794470 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794513 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794539 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794599 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794637 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.794700 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 
18:56:45.795948 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.796238 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.805526 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.806096 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.806930 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.807676 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " 
pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.816575 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5r4j\" (UniqueName: \"kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j\") pod \"ceilometer-0\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " pod="openstack/ceilometer-0" Jan 26 18:56:45 crc kubenswrapper[4737]: I0126 18:56:45.967279 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:56:46 crc kubenswrapper[4737]: I0126 18:56:46.220325 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:56:46 crc kubenswrapper[4737]: I0126 18:56:46.220644 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:56:47 crc kubenswrapper[4737]: I0126 18:56:47.015156 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee37723-a972-4371-9193-bf20e0126bca" path="/var/lib/kubelet/pods/7ee37723-a972-4371-9193-bf20e0126bca/volumes" Jan 26 18:56:47 crc kubenswrapper[4737]: I0126 18:56:47.246862 4737 generic.go:334] "Generic (PLEG): container finished" podID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" exitCode=0 Jan 26 18:56:47 crc kubenswrapper[4737]: I0126 18:56:47.246952 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8dbb8f99-b67tw" event={"ID":"6025d581-6326-4154-b2ad-ba111e0d0f61","Type":"ContainerDied","Data":"f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044"} Jan 26 18:56:48 crc kubenswrapper[4737]: I0126 18:56:48.161157 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:48 crc kubenswrapper[4737]: I0126 18:56:48.161274 4737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 
18:56:48 crc kubenswrapper[4737]: I0126 18:56:48.246964 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 18:56:51 crc kubenswrapper[4737]: I0126 18:56:51.846891 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:52 crc kubenswrapper[4737]: E0126 18:56:52.056250 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044 is running failed: container process not found" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:52 crc kubenswrapper[4737]: E0126 18:56:52.059607 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044 is running failed: container process not found" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:52 crc kubenswrapper[4737]: E0126 18:56:52.061896 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044 is running failed: container process not found" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 18:56:52 crc kubenswrapper[4737]: E0126 18:56:52.061939 4737 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044 is running failed: container process not 
found" probeType="Readiness" pod="openstack/heat-engine-5f8dbb8f99-b67tw" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.263207 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.332850 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" event={"ID":"9a24b527-6d52-4550-9e95-543e53e4a0fc","Type":"ContainerStarted","Data":"9550a24705751f7a1b329052cfaa40e7a39b4389b6801007d25f80bc6fe485a2"} Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.342357 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8dbb8f99-b67tw" event={"ID":"6025d581-6326-4154-b2ad-ba111e0d0f61","Type":"ContainerDied","Data":"445485ab3ac87c6dd79fbd1b397011dba978dccace9ae54e9f615d9e0e482a7d"} Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.342432 4737 scope.go:117] "RemoveContainer" containerID="f7ad0adf2c74e3ba5577b64e0db5f7c8ac35d38869aeee66172c0940191a5044" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.342677 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f8dbb8f99-b67tw" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.365423 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" podStartSLOduration=2.40089321 podStartE2EDuration="16.365402213s" podCreationTimestamp="2026-01-26 18:56:38 +0000 UTC" firstStartedPulling="2026-01-26 18:56:39.899887624 +0000 UTC m=+1573.208082332" lastFinishedPulling="2026-01-26 18:56:53.864396617 +0000 UTC m=+1587.172591335" observedRunningTime="2026-01-26 18:56:54.353412053 +0000 UTC m=+1587.661606761" watchObservedRunningTime="2026-01-26 18:56:54.365402213 +0000 UTC m=+1587.673596921" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.388439 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.405992 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data\") pod \"6025d581-6326-4154-b2ad-ba111e0d0f61\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.406280 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom\") pod \"6025d581-6326-4154-b2ad-ba111e0d0f61\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.406637 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwt89\" (UniqueName: \"kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89\") pod \"6025d581-6326-4154-b2ad-ba111e0d0f61\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.406735 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle\") pod \"6025d581-6326-4154-b2ad-ba111e0d0f61\" (UID: \"6025d581-6326-4154-b2ad-ba111e0d0f61\") " Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.414125 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6025d581-6326-4154-b2ad-ba111e0d0f61" (UID: "6025d581-6326-4154-b2ad-ba111e0d0f61"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.418245 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89" (OuterVolumeSpecName: "kube-api-access-hwt89") pod "6025d581-6326-4154-b2ad-ba111e0d0f61" (UID: "6025d581-6326-4154-b2ad-ba111e0d0f61"). InnerVolumeSpecName "kube-api-access-hwt89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.444700 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6025d581-6326-4154-b2ad-ba111e0d0f61" (UID: "6025d581-6326-4154-b2ad-ba111e0d0f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.477327 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data" (OuterVolumeSpecName: "config-data") pod "6025d581-6326-4154-b2ad-ba111e0d0f61" (UID: "6025d581-6326-4154-b2ad-ba111e0d0f61"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.510285 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwt89\" (UniqueName: \"kubernetes.io/projected/6025d581-6326-4154-b2ad-ba111e0d0f61-kube-api-access-hwt89\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.510322 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.510333 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.510344 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6025d581-6326-4154-b2ad-ba111e0d0f61-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.689315 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.703626 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5f8dbb8f99-b67tw"] Jan 26 18:56:54 crc kubenswrapper[4737]: I0126 18:56:54.997518 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" path="/var/lib/kubelet/pods/6025d581-6326-4154-b2ad-ba111e0d0f61/volumes" Jan 26 18:56:55 crc kubenswrapper[4737]: I0126 18:56:55.378143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerStarted","Data":"b1fd9439b71dd0bb450527cc64945fda13d085cd73000d7b986a8b9ed49db546"} Jan 26 18:56:55 crc kubenswrapper[4737]: I0126 18:56:55.378212 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerStarted","Data":"a348a92319d82ec1b6a1a6efee2d6051c7051e7c03f3bd954c46587444f7d5fb"} Jan 26 18:56:56 crc kubenswrapper[4737]: I0126 18:56:56.394663 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerStarted","Data":"dfb5316391219fe2677f4dcb7434498b89581f308133624bef3d1ac1f0256895"} Jan 26 18:56:57 crc kubenswrapper[4737]: I0126 18:56:57.408808 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerStarted","Data":"eba2baaadf0d55b7aefb35243d0d6460754423de600cf76fe5c0a77e3e91077c"} Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.440630 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerStarted","Data":"e3f7fb73a02346192c2dd0c383b762e95c953006fc16f3cfa1990d8b470a7a91"} Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.441638 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-central-agent" containerID="cri-o://b1fd9439b71dd0bb450527cc64945fda13d085cd73000d7b986a8b9ed49db546" gracePeriod=30 Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.441750 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.441787 4737 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="proxy-httpd" containerID="cri-o://e3f7fb73a02346192c2dd0c383b762e95c953006fc16f3cfa1990d8b470a7a91" gracePeriod=30 Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.441884 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-notification-agent" containerID="cri-o://dfb5316391219fe2677f4dcb7434498b89581f308133624bef3d1ac1f0256895" gracePeriod=30 Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.442325 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="sg-core" containerID="cri-o://eba2baaadf0d55b7aefb35243d0d6460754423de600cf76fe5c0a77e3e91077c" gracePeriod=30 Jan 26 18:56:59 crc kubenswrapper[4737]: I0126 18:56:59.494572 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.905289295 podStartE2EDuration="14.494553175s" podCreationTimestamp="2026-01-26 18:56:45 +0000 UTC" firstStartedPulling="2026-01-26 18:56:54.397384139 +0000 UTC m=+1587.705578847" lastFinishedPulling="2026-01-26 18:56:58.986648019 +0000 UTC m=+1592.294842727" observedRunningTime="2026-01-26 18:56:59.482911514 +0000 UTC m=+1592.791106222" watchObservedRunningTime="2026-01-26 18:56:59.494553175 +0000 UTC m=+1592.802747883" Jan 26 18:57:00 crc kubenswrapper[4737]: I0126 18:57:00.453356 4737 generic.go:334] "Generic (PLEG): container finished" podID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerID="eba2baaadf0d55b7aefb35243d0d6460754423de600cf76fe5c0a77e3e91077c" exitCode=2 Jan 26 18:57:00 crc kubenswrapper[4737]: I0126 18:57:00.453720 4737 generic.go:334] "Generic (PLEG): container finished" podID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" 
containerID="dfb5316391219fe2677f4dcb7434498b89581f308133624bef3d1ac1f0256895" exitCode=0 Jan 26 18:57:00 crc kubenswrapper[4737]: I0126 18:57:00.453746 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerDied","Data":"eba2baaadf0d55b7aefb35243d0d6460754423de600cf76fe5c0a77e3e91077c"} Jan 26 18:57:00 crc kubenswrapper[4737]: I0126 18:57:00.453781 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerDied","Data":"dfb5316391219fe2677f4dcb7434498b89581f308133624bef3d1ac1f0256895"} Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.329195 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:01 crc kubenswrapper[4737]: E0126 18:57:01.330165 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.330187 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.330504 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6025d581-6326-4154-b2ad-ba111e0d0f61" containerName="heat-engine" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.332557 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.355862 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.500872 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.501134 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsttz\" (UniqueName: \"kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.501507 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.604351 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.604770 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fsttz\" (UniqueName: \"kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.604837 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.604974 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.605306 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.627177 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsttz\" (UniqueName: \"kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz\") pod \"certified-operators-qtng7\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:01 crc kubenswrapper[4737]: I0126 18:57:01.656309 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:02 crc kubenswrapper[4737]: I0126 18:57:02.209541 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:02 crc kubenswrapper[4737]: I0126 18:57:02.477435 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerStarted","Data":"94e25f4d19369c361fbe8329c44a9241a0c3baef04eecac2694541e75f338fa2"} Jan 26 18:57:03 crc kubenswrapper[4737]: I0126 18:57:03.491489 4737 generic.go:334] "Generic (PLEG): container finished" podID="8017af82-c5f4-443a-8f61-e69b015d296b" containerID="5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a" exitCode=0 Jan 26 18:57:03 crc kubenswrapper[4737]: I0126 18:57:03.491562 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerDied","Data":"5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a"} Jan 26 18:57:04 crc kubenswrapper[4737]: I0126 18:57:04.511021 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerStarted","Data":"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059"} Jan 26 18:57:05 crc kubenswrapper[4737]: I0126 18:57:05.525646 4737 generic.go:334] "Generic (PLEG): container finished" podID="8017af82-c5f4-443a-8f61-e69b015d296b" containerID="c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059" exitCode=0 Jan 26 18:57:05 crc kubenswrapper[4737]: I0126 18:57:05.525823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" 
event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerDied","Data":"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059"} Jan 26 18:57:06 crc kubenswrapper[4737]: I0126 18:57:06.539989 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerStarted","Data":"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331"} Jan 26 18:57:06 crc kubenswrapper[4737]: I0126 18:57:06.567334 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qtng7" podStartSLOduration=3.143433248 podStartE2EDuration="5.567292192s" podCreationTimestamp="2026-01-26 18:57:01 +0000 UTC" firstStartedPulling="2026-01-26 18:57:03.49399123 +0000 UTC m=+1596.802185948" lastFinishedPulling="2026-01-26 18:57:05.917850184 +0000 UTC m=+1599.226044892" observedRunningTime="2026-01-26 18:57:06.559522001 +0000 UTC m=+1599.867716709" watchObservedRunningTime="2026-01-26 18:57:06.567292192 +0000 UTC m=+1599.875486900" Jan 26 18:57:08 crc kubenswrapper[4737]: I0126 18:57:08.562009 4737 generic.go:334] "Generic (PLEG): container finished" podID="9a24b527-6d52-4550-9e95-543e53e4a0fc" containerID="9550a24705751f7a1b329052cfaa40e7a39b4389b6801007d25f80bc6fe485a2" exitCode=0 Jan 26 18:57:08 crc kubenswrapper[4737]: I0126 18:57:08.562198 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" event={"ID":"9a24b527-6d52-4550-9e95-543e53e4a0fc","Type":"ContainerDied","Data":"9550a24705751f7a1b329052cfaa40e7a39b4389b6801007d25f80bc6fe485a2"} Jan 26 18:57:09 crc kubenswrapper[4737]: I0126 18:57:09.595047 4737 generic.go:334] "Generic (PLEG): container finished" podID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerID="b1fd9439b71dd0bb450527cc64945fda13d085cd73000d7b986a8b9ed49db546" exitCode=0 Jan 26 18:57:09 crc kubenswrapper[4737]: I0126 18:57:09.595624 
4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerDied","Data":"b1fd9439b71dd0bb450527cc64945fda13d085cd73000d7b986a8b9ed49db546"} Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.135484 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.276740 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts\") pod \"9a24b527-6d52-4550-9e95-543e53e4a0fc\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.276960 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data\") pod \"9a24b527-6d52-4550-9e95-543e53e4a0fc\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.277061 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954c5\" (UniqueName: \"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5\") pod \"9a24b527-6d52-4550-9e95-543e53e4a0fc\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.277216 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle\") pod \"9a24b527-6d52-4550-9e95-543e53e4a0fc\" (UID: \"9a24b527-6d52-4550-9e95-543e53e4a0fc\") " Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.289792 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5" (OuterVolumeSpecName: "kube-api-access-954c5") pod "9a24b527-6d52-4550-9e95-543e53e4a0fc" (UID: "9a24b527-6d52-4550-9e95-543e53e4a0fc"). InnerVolumeSpecName "kube-api-access-954c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.294365 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts" (OuterVolumeSpecName: "scripts") pod "9a24b527-6d52-4550-9e95-543e53e4a0fc" (UID: "9a24b527-6d52-4550-9e95-543e53e4a0fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.332575 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a24b527-6d52-4550-9e95-543e53e4a0fc" (UID: "9a24b527-6d52-4550-9e95-543e53e4a0fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.333171 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data" (OuterVolumeSpecName: "config-data") pod "9a24b527-6d52-4550-9e95-543e53e4a0fc" (UID: "9a24b527-6d52-4550-9e95-543e53e4a0fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.380371 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.380406 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.380419 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954c5\" (UniqueName: \"kubernetes.io/projected/9a24b527-6d52-4550-9e95-543e53e4a0fc-kube-api-access-954c5\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.380435 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a24b527-6d52-4550-9e95-543e53e4a0fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.606997 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" event={"ID":"9a24b527-6d52-4550-9e95-543e53e4a0fc","Type":"ContainerDied","Data":"5ea9bd8a541c6ea5e518807d040e47614069a52365e48cc47e9030bebd4e3ce8"} Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.607048 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ea9bd8a541c6ea5e518807d040e47614069a52365e48cc47e9030bebd4e3ce8" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.607128 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s2dcv" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.734379 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:57:10 crc kubenswrapper[4737]: E0126 18:57:10.735786 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a24b527-6d52-4550-9e95-543e53e4a0fc" containerName="nova-cell0-conductor-db-sync" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.735831 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a24b527-6d52-4550-9e95-543e53e4a0fc" containerName="nova-cell0-conductor-db-sync" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.736370 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a24b527-6d52-4550-9e95-543e53e4a0fc" containerName="nova-cell0-conductor-db-sync" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.738226 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.741739 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wcfhr" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.742221 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.754885 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.896668 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9l7\" (UniqueName: \"kubernetes.io/projected/5d833a0c-e63e-4296-85f9-f7489007fa6c-kube-api-access-vg9l7\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:10 crc 
kubenswrapper[4737]: I0126 18:57:10.897161 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.897399 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.999760 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9l7\" (UniqueName: \"kubernetes.io/projected/5d833a0c-e63e-4296-85f9-f7489007fa6c-kube-api-access-vg9l7\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:10 crc kubenswrapper[4737]: I0126 18:57:10.999871 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:10.999960 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.005701 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.006684 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d833a0c-e63e-4296-85f9-f7489007fa6c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.033127 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9l7\" (UniqueName: \"kubernetes.io/projected/5d833a0c-e63e-4296-85f9-f7489007fa6c-kube-api-access-vg9l7\") pod \"nova-cell0-conductor-0\" (UID: \"5d833a0c-e63e-4296-85f9-f7489007fa6c\") " pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.059048 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.591019 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.620502 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5d833a0c-e63e-4296-85f9-f7489007fa6c","Type":"ContainerStarted","Data":"608b1c991d71ce01d1bf3517088128c987ad62835fec440a66a0bfa4e95171ab"} Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.656504 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.658378 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:11 crc kubenswrapper[4737]: I0126 18:57:11.724468 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:12 crc kubenswrapper[4737]: I0126 18:57:12.633466 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5d833a0c-e63e-4296-85f9-f7489007fa6c","Type":"ContainerStarted","Data":"7265516e01411c51a2c85f2010dfb20b68382e2143e16965f7838e327066cad7"} Jan 26 18:57:12 crc kubenswrapper[4737]: I0126 18:57:12.633908 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:12 crc kubenswrapper[4737]: I0126 18:57:12.658764 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.65874029 podStartE2EDuration="2.65874029s" podCreationTimestamp="2026-01-26 18:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 18:57:12.648518182 +0000 UTC m=+1605.956712890" watchObservedRunningTime="2026-01-26 18:57:12.65874029 +0000 UTC m=+1605.966934988" Jan 26 18:57:12 crc kubenswrapper[4737]: I0126 18:57:12.684947 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:12 crc kubenswrapper[4737]: I0126 18:57:12.736134 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:14 crc kubenswrapper[4737]: I0126 18:57:14.653292 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qtng7" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="registry-server" containerID="cri-o://e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331" gracePeriod=2 Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.238621 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.316312 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content\") pod \"8017af82-c5f4-443a-8f61-e69b015d296b\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.316393 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsttz\" (UniqueName: \"kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz\") pod \"8017af82-c5f4-443a-8f61-e69b015d296b\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.316434 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities\") pod \"8017af82-c5f4-443a-8f61-e69b015d296b\" (UID: \"8017af82-c5f4-443a-8f61-e69b015d296b\") " Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.317614 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities" (OuterVolumeSpecName: "utilities") pod "8017af82-c5f4-443a-8f61-e69b015d296b" (UID: "8017af82-c5f4-443a-8f61-e69b015d296b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.325598 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz" (OuterVolumeSpecName: "kube-api-access-fsttz") pod "8017af82-c5f4-443a-8f61-e69b015d296b" (UID: "8017af82-c5f4-443a-8f61-e69b015d296b"). InnerVolumeSpecName "kube-api-access-fsttz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.375095 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8017af82-c5f4-443a-8f61-e69b015d296b" (UID: "8017af82-c5f4-443a-8f61-e69b015d296b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.419334 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.419642 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsttz\" (UniqueName: \"kubernetes.io/projected/8017af82-c5f4-443a-8f61-e69b015d296b-kube-api-access-fsttz\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.419705 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8017af82-c5f4-443a-8f61-e69b015d296b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.665182 4737 generic.go:334] "Generic (PLEG): container finished" podID="8017af82-c5f4-443a-8f61-e69b015d296b" containerID="e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331" exitCode=0 Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.665259 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerDied","Data":"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331"} Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.665563 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-qtng7" event={"ID":"8017af82-c5f4-443a-8f61-e69b015d296b","Type":"ContainerDied","Data":"94e25f4d19369c361fbe8329c44a9241a0c3baef04eecac2694541e75f338fa2"} Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.665588 4737 scope.go:117] "RemoveContainer" containerID="e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.665303 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtng7" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.698431 4737 scope.go:117] "RemoveContainer" containerID="c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.721168 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.732964 4737 scope.go:117] "RemoveContainer" containerID="5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.733290 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qtng7"] Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.788869 4737 scope.go:117] "RemoveContainer" containerID="e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331" Jan 26 18:57:15 crc kubenswrapper[4737]: E0126 18:57:15.789633 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331\": container with ID starting with e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331 not found: ID does not exist" containerID="e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 
18:57:15.789698 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331"} err="failed to get container status \"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331\": rpc error: code = NotFound desc = could not find container \"e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331\": container with ID starting with e3e4ac898336c470b6d40ba2d448111dd1f06aa7d8e1653c324263c3307ba331 not found: ID does not exist" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.789734 4737 scope.go:117] "RemoveContainer" containerID="c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059" Jan 26 18:57:15 crc kubenswrapper[4737]: E0126 18:57:15.790515 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059\": container with ID starting with c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059 not found: ID does not exist" containerID="c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.790585 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059"} err="failed to get container status \"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059\": rpc error: code = NotFound desc = could not find container \"c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059\": container with ID starting with c1be2233e894976dce2a18bb87516a801011d8d84b7d2b77dc7c8461c8601059 not found: ID does not exist" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.790613 4737 scope.go:117] "RemoveContainer" containerID="5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a" Jan 26 18:57:15 crc 
kubenswrapper[4737]: E0126 18:57:15.791296 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a\": container with ID starting with 5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a not found: ID does not exist" containerID="5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.791335 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a"} err="failed to get container status \"5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a\": rpc error: code = NotFound desc = could not find container \"5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a\": container with ID starting with 5ad288c63cb4e3caa71b724665902fa519df278bcab7b97227381f988ac8c08a not found: ID does not exist" Jan 26 18:57:15 crc kubenswrapper[4737]: I0126 18:57:15.972820 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.105571 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.654177 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-j866g"] Jan 26 18:57:16 crc kubenswrapper[4737]: E0126 18:57:16.654700 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="extract-utilities" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.654716 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="extract-utilities" Jan 26 18:57:16 crc kubenswrapper[4737]: E0126 18:57:16.654733 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="extract-content" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.654740 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="extract-content" Jan 26 18:57:16 crc kubenswrapper[4737]: E0126 18:57:16.654776 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="registry-server" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.654782 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="registry-server" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.654968 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" containerName="registry-server" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.655779 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.675025 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.675354 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.704846 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j866g"] Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.761414 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.761492 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdjm\" (UniqueName: \"kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.761599 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.761628 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.864880 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.865238 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.865406 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.865466 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhdjm\" (UniqueName: \"kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.877002 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.888811 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.900880 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhdjm\" (UniqueName: \"kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.903194 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts\") pod \"nova-cell0-cell-mapping-j866g\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:16 crc kubenswrapper[4737]: I0126 18:57:16.979410 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.038761 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8017af82-c5f4-443a-8f61-e69b015d296b" path="/var/lib/kubelet/pods/8017af82-c5f4-443a-8f61-e69b015d296b/volumes" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.039550 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.057960 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.068639 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.072359 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.177464 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.177808 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.177835 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n9jq\" (UniqueName: \"kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq\") pod \"nova-api-0\" (UID: 
\"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.177901 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.279654 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.279822 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.279847 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n9jq\" (UniqueName: \"kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.279886 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.280314 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.302466 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.303602 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.310145 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.312572 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.323702 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.354179 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n9jq\" (UniqueName: \"kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq\") pod \"nova-api-0\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.357974 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.383144 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzjq\" (UniqueName: \"kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.383323 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.383354 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.488034 4737 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.496519 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.496585 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.496775 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzjq\" (UniqueName: \"kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.512394 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.553765 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.566226 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.618356 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtzjq\" (UniqueName: \"kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq\") pod \"nova-scheduler-0\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.686744 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.703323 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.749102 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.785524 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.785924 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.786025 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r88gj\" (UniqueName: \"kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj\") pod \"nova-metadata-0\" (UID: 
\"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.786211 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.838155 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.859637 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.861197 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.866468 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.884904 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.888709 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.901134 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r88gj\" (UniqueName: \"kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " 
pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.901325 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.901695 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.902283 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.906607 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.911020 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.915443 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r88gj\" (UniqueName: \"kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 
18:57:17.917496 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data\") pod \"nova-metadata-0\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " pod="openstack/nova-metadata-0" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.917768 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:17 crc kubenswrapper[4737]: I0126 18:57:17.971940 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.003888 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.003936 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.003963 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxjn\" (UniqueName: \"kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004022 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004046 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004063 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004156 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004202 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.004270 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.030723 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.107136 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.107319 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.109936 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.109972 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.110016 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxjn\" (UniqueName: \"kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.110208 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.110254 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.110278 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.110448 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc 
kubenswrapper[4737]: I0126 18:57:18.111694 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.115831 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.119106 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.126140 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.128543 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.137264 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.152202 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.152980 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg\") pod \"dnsmasq-dns-9b86998b5-c8p2s\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.159620 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxjn\" (UniqueName: \"kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn\") pod \"nova-cell1-novncproxy-0\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.187485 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j866g"] Jan 26 18:57:18 crc kubenswrapper[4737]: W0126 18:57:18.245538 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bea1a20_5eb7_4003_8fdd_43ecb5fb550a.slice/crio-d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd WatchSource:0}: Error finding container d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd: Status 404 returned error can't find 
the container with id d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.364230 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.398016 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.664708 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.766789 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j866g" event={"ID":"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a","Type":"ContainerStarted","Data":"b8f1aa0848e0a3f4d0a592fd5228b2391f3981971cb36c36e7aec34ce8cd5abb"} Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.766885 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j866g" event={"ID":"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a","Type":"ContainerStarted","Data":"d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd"} Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.774952 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerStarted","Data":"e3d1103dd282442d18e4998c54c155735870696ea0fa1d1d84b2df26b1982ce9"} Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.797360 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-j866g" podStartSLOduration=2.7973242579999997 podStartE2EDuration="2.797324258s" podCreationTimestamp="2026-01-26 18:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:18.791549924 +0000 UTC 
m=+1612.099744632" watchObservedRunningTime="2026-01-26 18:57:18.797324258 +0000 UTC m=+1612.105518966" Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.905176 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:18 crc kubenswrapper[4737]: I0126 18:57:18.927104 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.565897 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.581212 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.839578 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b82b7d0-7418-465f-a126-5882e578889b","Type":"ContainerStarted","Data":"09d7715b1e424057d1e9d24a1bfda745683008af0b7dfb381a27b4be575acc78"} Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.868394 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" event={"ID":"4b6804db-b6cc-41ad-bb1a-603bdca29f7f","Type":"ContainerStarted","Data":"b4349625c82eadd172ffdac233b98962f55b9d2ad99eec67fe31d80c9379255f"} Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.936927 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"252aaba1-e252-4c10-b9de-8e6100e48267","Type":"ContainerStarted","Data":"c4143507e4ba82f6a0954feb9209ecff4ddac0fc5607e824206028e34742ce1e"} Jan 26 18:57:19 crc kubenswrapper[4737]: I0126 18:57:19.961135 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerStarted","Data":"754c978fea018126d93e2d70f4f72ce7d2c510ddbbcbf0828db9f838439517ce"} Jan 26 18:57:20 crc kubenswrapper[4737]: 
I0126 18:57:20.186245 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b22wn"] Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.201109 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b22wn"] Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.201249 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.205656 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.211136 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.281512 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.281814 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.281832 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: 
\"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.281850 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zv5\" (UniqueName: \"kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.383596 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.383647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.383668 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zv5\" (UniqueName: \"kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.383720 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data\") pod 
\"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.391267 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.395639 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.405712 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.406099 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zv5\" (UniqueName: \"kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5\") pod \"nova-cell1-conductor-db-sync-b22wn\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:20 crc kubenswrapper[4737]: I0126 18:57:20.551174 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:21 crc kubenswrapper[4737]: I0126 18:57:21.017842 4737 generic.go:334] "Generic (PLEG): container finished" podID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerID="aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28" exitCode=0 Jan 26 18:57:21 crc kubenswrapper[4737]: I0126 18:57:21.018016 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" event={"ID":"4b6804db-b6cc-41ad-bb1a-603bdca29f7f","Type":"ContainerDied","Data":"aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28"} Jan 26 18:57:21 crc kubenswrapper[4737]: I0126 18:57:21.203190 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b22wn"] Jan 26 18:57:21 crc kubenswrapper[4737]: I0126 18:57:21.764361 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:21 crc kubenswrapper[4737]: I0126 18:57:21.785776 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.037363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b22wn" event={"ID":"e850b319-4b13-4da1-a138-3373c2c6ecd2","Type":"ContainerStarted","Data":"9c5c86e220b689720e2541702ca731231d3515f7071e96ed7256880fbe86cb2e"} Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.037470 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b22wn" event={"ID":"e850b319-4b13-4da1-a138-3373c2c6ecd2","Type":"ContainerStarted","Data":"64dfe153be568f6f6eaab719860f8ca08491c47b2c88830f452128c252343d28"} Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.042394 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" 
event={"ID":"4b6804db-b6cc-41ad-bb1a-603bdca29f7f","Type":"ContainerStarted","Data":"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403"} Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.043680 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.073802 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-b22wn" podStartSLOduration=2.073778909 podStartE2EDuration="2.073778909s" podCreationTimestamp="2026-01-26 18:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:22.060603952 +0000 UTC m=+1615.368798680" watchObservedRunningTime="2026-01-26 18:57:22.073778909 +0000 UTC m=+1615.381973617" Jan 26 18:57:22 crc kubenswrapper[4737]: I0126 18:57:22.097855 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" podStartSLOduration=5.097827831 podStartE2EDuration="5.097827831s" podCreationTimestamp="2026-01-26 18:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:22.094156455 +0000 UTC m=+1615.402351163" watchObservedRunningTime="2026-01-26 18:57:22.097827831 +0000 UTC m=+1615.406022539" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.100499 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b82b7d0-7418-465f-a126-5882e578889b","Type":"ContainerStarted","Data":"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.100534 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" 
podUID="4b82b7d0-7418-465f-a126-5882e578889b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256" gracePeriod=30 Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.104782 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"252aaba1-e252-4c10-b9de-8e6100e48267","Type":"ContainerStarted","Data":"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.110712 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerStarted","Data":"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.110765 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerStarted","Data":"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.118553 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerStarted","Data":"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.118613 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerStarted","Data":"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494"} Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.118794 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-log" 
containerID="cri-o://e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" gracePeriod=30 Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.118836 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-metadata" containerID="cri-o://c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" gracePeriod=30 Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.135847 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.461187156 podStartE2EDuration="9.135816479s" podCreationTimestamp="2026-01-26 18:57:17 +0000 UTC" firstStartedPulling="2026-01-26 18:57:19.693422089 +0000 UTC m=+1613.001616797" lastFinishedPulling="2026-01-26 18:57:24.368051412 +0000 UTC m=+1617.676246120" observedRunningTime="2026-01-26 18:57:26.124059803 +0000 UTC m=+1619.432254511" watchObservedRunningTime="2026-01-26 18:57:26.135816479 +0000 UTC m=+1619.444011187" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.175279 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.729273148 podStartE2EDuration="9.175255712s" podCreationTimestamp="2026-01-26 18:57:17 +0000 UTC" firstStartedPulling="2026-01-26 18:57:18.917203955 +0000 UTC m=+1612.225398663" lastFinishedPulling="2026-01-26 18:57:24.363186519 +0000 UTC m=+1617.671381227" observedRunningTime="2026-01-26 18:57:26.154038465 +0000 UTC m=+1619.462233173" watchObservedRunningTime="2026-01-26 18:57:26.175255712 +0000 UTC m=+1619.483450420" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.192167 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.492469606 podStartE2EDuration="10.192137127s" podCreationTimestamp="2026-01-26 18:57:16 +0000 UTC" 
firstStartedPulling="2026-01-26 18:57:18.673570632 +0000 UTC m=+1611.981765340" lastFinishedPulling="2026-01-26 18:57:24.373238153 +0000 UTC m=+1617.681432861" observedRunningTime="2026-01-26 18:57:26.1730374 +0000 UTC m=+1619.481232108" watchObservedRunningTime="2026-01-26 18:57:26.192137127 +0000 UTC m=+1619.500331845" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.212299 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.83515864 podStartE2EDuration="9.212270368s" podCreationTimestamp="2026-01-26 18:57:17 +0000 UTC" firstStartedPulling="2026-01-26 18:57:18.989229044 +0000 UTC m=+1612.297423752" lastFinishedPulling="2026-01-26 18:57:24.366340772 +0000 UTC m=+1617.674535480" observedRunningTime="2026-01-26 18:57:26.197723498 +0000 UTC m=+1619.505918206" watchObservedRunningTime="2026-01-26 18:57:26.212270368 +0000 UTC m=+1619.520465086" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.871791 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.907954 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle\") pod \"68f221b9-d702-4331-b67f-10bd1b2125dc\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.908375 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r88gj\" (UniqueName: \"kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj\") pod \"68f221b9-d702-4331-b67f-10bd1b2125dc\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.908454 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data\") pod \"68f221b9-d702-4331-b67f-10bd1b2125dc\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.908502 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs\") pod \"68f221b9-d702-4331-b67f-10bd1b2125dc\" (UID: \"68f221b9-d702-4331-b67f-10bd1b2125dc\") " Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.909384 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs" (OuterVolumeSpecName: "logs") pod "68f221b9-d702-4331-b67f-10bd1b2125dc" (UID: "68f221b9-d702-4331-b67f-10bd1b2125dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.915298 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj" (OuterVolumeSpecName: "kube-api-access-r88gj") pod "68f221b9-d702-4331-b67f-10bd1b2125dc" (UID: "68f221b9-d702-4331-b67f-10bd1b2125dc"). InnerVolumeSpecName "kube-api-access-r88gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:26 crc kubenswrapper[4737]: I0126 18:57:26.980171 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data" (OuterVolumeSpecName: "config-data") pod "68f221b9-d702-4331-b67f-10bd1b2125dc" (UID: "68f221b9-d702-4331-b67f-10bd1b2125dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.012477 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r88gj\" (UniqueName: \"kubernetes.io/projected/68f221b9-d702-4331-b67f-10bd1b2125dc-kube-api-access-r88gj\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.012507 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.012521 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f221b9-d702-4331-b67f-10bd1b2125dc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.049571 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "68f221b9-d702-4331-b67f-10bd1b2125dc" (UID: "68f221b9-d702-4331-b67f-10bd1b2125dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.115339 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f221b9-d702-4331-b67f-10bd1b2125dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132006 4737 generic.go:334] "Generic (PLEG): container finished" podID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerID="c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" exitCode=0 Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132235 4737 generic.go:334] "Generic (PLEG): container finished" podID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerID="e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" exitCode=143 Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132125 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerDied","Data":"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5"} Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132359 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerDied","Data":"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494"} Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132388 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f221b9-d702-4331-b67f-10bd1b2125dc","Type":"ContainerDied","Data":"754c978fea018126d93e2d70f4f72ce7d2c510ddbbcbf0828db9f838439517ce"} Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132406 4737 scope.go:117] "RemoveContainer" 
containerID="c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.132086 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.178392 4737 scope.go:117] "RemoveContainer" containerID="e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.181739 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.193607 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.213894 4737 scope.go:117] "RemoveContainer" containerID="c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" Jan 26 18:57:27 crc kubenswrapper[4737]: E0126 18:57:27.214485 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5\": container with ID starting with c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5 not found: ID does not exist" containerID="c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.214517 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5"} err="failed to get container status \"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5\": rpc error: code = NotFound desc = could not find container \"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5\": container with ID starting with c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5 not found: ID does not exist" Jan 26 18:57:27 crc 
kubenswrapper[4737]: I0126 18:57:27.214539 4737 scope.go:117] "RemoveContainer" containerID="e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" Jan 26 18:57:27 crc kubenswrapper[4737]: E0126 18:57:27.214828 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494\": container with ID starting with e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494 not found: ID does not exist" containerID="e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.214848 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494"} err="failed to get container status \"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494\": rpc error: code = NotFound desc = could not find container \"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494\": container with ID starting with e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494 not found: ID does not exist" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.214866 4737 scope.go:117] "RemoveContainer" containerID="c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.215158 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5"} err="failed to get container status \"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5\": rpc error: code = NotFound desc = could not find container \"c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5\": container with ID starting with c5128ed303a6101215e83d0fa973d348e0a067a7fc01154118b16688fed29ac5 not found: ID does not exist" Jan 26 
18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.215176 4737 scope.go:117] "RemoveContainer" containerID="e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.215384 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494"} err="failed to get container status \"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494\": rpc error: code = NotFound desc = could not find container \"e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494\": container with ID starting with e46297169c0093b29255cf65e3c6b887db47bb482e17fce9293f965f587b5494 not found: ID does not exist" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.234385 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:27 crc kubenswrapper[4737]: E0126 18:57:27.234898 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-log" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.234910 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-log" Jan 26 18:57:27 crc kubenswrapper[4737]: E0126 18:57:27.234927 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-metadata" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.234933 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-metadata" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.235164 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-log" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.235188 4737 
memory_manager.go:354] "RemoveStaleState removing state" podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" containerName="nova-metadata-metadata" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.236579 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.239183 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.239376 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.248161 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.320539 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.320995 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.321174 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.321409 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gcp\" (UniqueName: \"kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.321621 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.423445 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.423630 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.423673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.423708 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.423745 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7gcp\" (UniqueName: \"kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.424393 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.431431 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.431758 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.435814 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc 
kubenswrapper[4737]: I0126 18:57:27.445649 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7gcp\" (UniqueName: \"kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp\") pod \"nova-metadata-0\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.489223 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.489530 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.582213 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.749940 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.750258 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:57:27 crc kubenswrapper[4737]: I0126 18:57:27.813346 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.202563 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:28 crc kubenswrapper[4737]: W0126 18:57:28.208973 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a1eca90_8a72_4b98_ac6b_c81a5d1e2e47.slice/crio-636b7e1b50b77a2e77a176a4a6e4760244e78027dda02b05a2a38e277d2e7cac WatchSource:0}: Error finding container 636b7e1b50b77a2e77a176a4a6e4760244e78027dda02b05a2a38e277d2e7cac: Status 404 returned error can't find the container with id 
636b7e1b50b77a2e77a176a4a6e4760244e78027dda02b05a2a38e277d2e7cac Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.216323 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.364902 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.441953 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.569480 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.569835 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="dnsmasq-dns" containerID="cri-o://6ea931668de6131c7939bbf3ffd3496f7ac394220cee3dc2c9c2db9dd9bd1784" gracePeriod=10 Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.578678 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:57:28 crc kubenswrapper[4737]: I0126 18:57:28.578845 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.043612 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="68f221b9-d702-4331-b67f-10bd1b2125dc" path="/var/lib/kubelet/pods/68f221b9-d702-4331-b67f-10bd1b2125dc/volumes" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.205392 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerStarted","Data":"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11"} Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.205820 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerStarted","Data":"636b7e1b50b77a2e77a176a4a6e4760244e78027dda02b05a2a38e277d2e7cac"} Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.207823 4737 generic.go:334] "Generic (PLEG): container finished" podID="b7a68838-86b3-499a-86cd-943dcb86e129" containerID="6ea931668de6131c7939bbf3ffd3496f7ac394220cee3dc2c9c2db9dd9bd1784" exitCode=0 Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.208403 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" event={"ID":"b7a68838-86b3-499a-86cd-943dcb86e129","Type":"ContainerDied","Data":"6ea931668de6131c7939bbf3ffd3496f7ac394220cee3dc2c9c2db9dd9bd1784"} Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.629940 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.713408 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.713530 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.713717 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.713833 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.713923 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2wzq\" (UniqueName: \"kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.714109 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0\") pod \"b7a68838-86b3-499a-86cd-943dcb86e129\" (UID: \"b7a68838-86b3-499a-86cd-943dcb86e129\") " Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.722920 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq" (OuterVolumeSpecName: "kube-api-access-r2wzq") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "kube-api-access-r2wzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.817804 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2wzq\" (UniqueName: \"kubernetes.io/projected/b7a68838-86b3-499a-86cd-943dcb86e129-kube-api-access-r2wzq\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.850634 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config" (OuterVolumeSpecName: "config") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.868884 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.875501 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.892587 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.913906 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7a68838-86b3-499a-86cd-943dcb86e129" (UID: "b7a68838-86b3-499a-86cd-943dcb86e129"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.919905 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.919947 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.919959 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.919978 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4737]: I0126 18:57:29.919990 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7a68838-86b3-499a-86cd-943dcb86e129-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.226354 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" event={"ID":"b7a68838-86b3-499a-86cd-943dcb86e129","Type":"ContainerDied","Data":"921a42853efa0835a026db284e6c35f47c6d5b2309102f938d7500d7e27b8cdc"} Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.226929 4737 scope.go:117] "RemoveContainer" containerID="6ea931668de6131c7939bbf3ffd3496f7ac394220cee3dc2c9c2db9dd9bd1784" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.227300 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-z4djp" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.242865 4737 generic.go:334] "Generic (PLEG): container finished" podID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerID="e3f7fb73a02346192c2dd0c383b762e95c953006fc16f3cfa1990d8b470a7a91" exitCode=137 Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.242940 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerDied","Data":"e3f7fb73a02346192c2dd0c383b762e95c953006fc16f3cfa1990d8b470a7a91"} Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.242977 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce","Type":"ContainerDied","Data":"a348a92319d82ec1b6a1a6efee2d6051c7051e7c03f3bd954c46587444f7d5fb"} Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.242990 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a348a92319d82ec1b6a1a6efee2d6051c7051e7c03f3bd954c46587444f7d5fb" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.246359 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.253899 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerStarted","Data":"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587"} Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.310713 4737 scope.go:117] "RemoveContainer" containerID="36c9eb0d5966f1a83c16dedc873c3a51d737a01844299a49d77401c67793c528" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.311721 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.311701088 podStartE2EDuration="3.311701088s" podCreationTimestamp="2026-01-26 18:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:30.301578801 +0000 UTC m=+1623.609773509" watchObservedRunningTime="2026-01-26 18:57:30.311701088 +0000 UTC m=+1623.619895796" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.377419 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.386436 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-z4djp"] Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433031 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433195 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5r4j\" (UniqueName: 
\"kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433299 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433357 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433439 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433516 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.433600 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml\") pod \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\" (UID: \"017f5dce-64ad-4a66-be2f-1ced9ae7c9ce\") " Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.438600 4737 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.440523 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.441263 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts" (OuterVolumeSpecName: "scripts") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.441469 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j" (OuterVolumeSpecName: "kube-api-access-p5r4j") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "kube-api-access-p5r4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.531847 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.536708 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.536744 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5r4j\" (UniqueName: \"kubernetes.io/projected/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-kube-api-access-p5r4j\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.536755 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.536764 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.536773 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.549488 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.577799 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data" (OuterVolumeSpecName: "config-data") pod "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" (UID: "017f5dce-64ad-4a66-be2f-1ced9ae7c9ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.639052 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.639117 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.948743 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.949211 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 18:57:30 crc kubenswrapper[4737]: I0126 18:57:30.999384 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" path="/var/lib/kubelet/pods/b7a68838-86b3-499a-86cd-943dcb86e129/volumes" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.269699 4737 generic.go:334] "Generic (PLEG): container finished" podID="5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" containerID="b8f1aa0848e0a3f4d0a592fd5228b2391f3981971cb36c36e7aec34ce8cd5abb" exitCode=0 Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.269778 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j866g" event={"ID":"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a","Type":"ContainerDied","Data":"b8f1aa0848e0a3f4d0a592fd5228b2391f3981971cb36c36e7aec34ce8cd5abb"} Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.269818 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.334710 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-qcqmg"] Jan 26 18:57:31 crc kubenswrapper[4737]: E0126 18:57:31.337256 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="sg-core" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.337381 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="sg-core" Jan 26 18:57:31 crc kubenswrapper[4737]: E0126 18:57:31.337533 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-central-agent" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.337629 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-central-agent" Jan 26 18:57:31 crc 
kubenswrapper[4737]: E0126 18:57:31.337733 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="init" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.337846 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="init" Jan 26 18:57:31 crc kubenswrapper[4737]: E0126 18:57:31.337925 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="proxy-httpd" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.337975 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="proxy-httpd" Jan 26 18:57:31 crc kubenswrapper[4737]: E0126 18:57:31.338092 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="dnsmasq-dns" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.338183 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="dnsmasq-dns" Jan 26 18:57:31 crc kubenswrapper[4737]: E0126 18:57:31.338335 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-notification-agent" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.338419 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-notification-agent" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.339130 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a68838-86b3-499a-86cd-943dcb86e129" containerName="dnsmasq-dns" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.339245 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-central-agent" Jan 26 18:57:31 crc 
kubenswrapper[4737]: I0126 18:57:31.341147 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="ceilometer-notification-agent" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.341317 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="sg-core" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.341408 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" containerName="proxy-httpd" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.343582 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.438010 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qcqmg"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.451854 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.469486 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.498147 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.503459 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.506548 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.506918 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.507028 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhckz\" (UniqueName: \"kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.507110 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.574175 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.595624 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7e93-account-create-update-gxsv8"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.597892 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.601610 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608430 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55s2q\" (UniqueName: \"kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608496 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhckz\" (UniqueName: \"kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608518 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rn54\" (UniqueName: \"kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608537 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608566 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.608717 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609213 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609253 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609407 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609462 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts\") pod \"ceilometer-0\" (UID: 
\"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609699 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.609741 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.637155 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7e93-account-create-update-gxsv8"] Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.643324 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhckz\" (UniqueName: \"kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz\") pod \"aodh-db-create-qcqmg\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.687835 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.712948 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rn54\" (UniqueName: \"kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713016 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713430 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713572 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713613 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713662 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713699 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713778 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.713856 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55s2q\" (UniqueName: \"kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.715270 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.718416 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.719054 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.722412 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.722528 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.730058 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.735384 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55s2q\" (UniqueName: \"kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q\") pod \"aodh-7e93-account-create-update-gxsv8\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.738248 
4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rn54\" (UniqueName: \"kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.740842 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts\") pod \"ceilometer-0\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.851440 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:57:31 crc kubenswrapper[4737]: I0126 18:57:31.921690 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.552942 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qcqmg"] Jan 26 18:57:32 crc kubenswrapper[4737]: W0126 18:57:32.556863 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod410f0427_0248_40f9_adc7_33af510f7842.slice/crio-ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0 WatchSource:0}: Error finding container ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0: Status 404 returned error can't find the container with id ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0 Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.582917 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.583047 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.869009 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7e93-account-create-update-gxsv8"] Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.926865 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:32 crc kubenswrapper[4737]: I0126 18:57:32.997874 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="017f5dce-64ad-4a66-be2f-1ced9ae7c9ce" path="/var/lib/kubelet/pods/017f5dce-64ad-4a66-be2f-1ced9ae7c9ce/volumes" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.037772 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:57:33 crc kubenswrapper[4737]: W0126 18:57:33.045495 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c0e0d5_e70d_4429_a1f0_cb2ee1aa4934.slice/crio-8b8562d63e011bd6117ba96b7aa5eb4410d1a9aea9b73ae89be4e0641c9133e2 WatchSource:0}: Error finding container 8b8562d63e011bd6117ba96b7aa5eb4410d1a9aea9b73ae89be4e0641c9133e2: Status 404 returned error can't find the container with id 8b8562d63e011bd6117ba96b7aa5eb4410d1a9aea9b73ae89be4e0641c9133e2 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.078502 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts\") pod \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.078654 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhdjm\" (UniqueName: \"kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm\") pod \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\" (UID: 
\"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.078698 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle\") pod \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.078814 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data\") pod \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\" (UID: \"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a\") " Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.086714 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm" (OuterVolumeSpecName: "kube-api-access-lhdjm") pod "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" (UID: "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a"). InnerVolumeSpecName "kube-api-access-lhdjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.088337 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts" (OuterVolumeSpecName: "scripts") pod "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" (UID: "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.117109 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" (UID: "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.131523 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data" (OuterVolumeSpecName: "config-data") pod "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" (UID: "5bea1a20-5eb7-4003-8fdd-43ecb5fb550a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.181903 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.181943 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.181956 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhdjm\" (UniqueName: \"kubernetes.io/projected/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-kube-api-access-lhdjm\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.181972 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.352499 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j866g" event={"ID":"5bea1a20-5eb7-4003-8fdd-43ecb5fb550a","Type":"ContainerDied","Data":"d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.352725 4737 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="d3ae729ac56dae73f9db6ab9dd094497e24268a8c2a32ec105f143d660750dcd" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.352591 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j866g" Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.361300 4737 generic.go:334] "Generic (PLEG): container finished" podID="e850b319-4b13-4da1-a138-3373c2c6ecd2" containerID="9c5c86e220b689720e2541702ca731231d3515f7071e96ed7256880fbe86cb2e" exitCode=0 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.361389 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b22wn" event={"ID":"e850b319-4b13-4da1-a138-3373c2c6ecd2","Type":"ContainerDied","Data":"9c5c86e220b689720e2541702ca731231d3515f7071e96ed7256880fbe86cb2e"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.369021 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerStarted","Data":"8b8562d63e011bd6117ba96b7aa5eb4410d1a9aea9b73ae89be4e0641c9133e2"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.374031 4737 generic.go:334] "Generic (PLEG): container finished" podID="410f0427-0248-40f9-adc7-33af510f7842" containerID="3ac8d17f683e9b94a8213e038309cebb9dd9baa77a53a58ffa3f54c75f7a7901" exitCode=0 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.374105 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcqmg" event={"ID":"410f0427-0248-40f9-adc7-33af510f7842","Type":"ContainerDied","Data":"3ac8d17f683e9b94a8213e038309cebb9dd9baa77a53a58ffa3f54c75f7a7901"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.374130 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcqmg" 
event={"ID":"410f0427-0248-40f9-adc7-33af510f7842","Type":"ContainerStarted","Data":"ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.387768 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7e93-account-create-update-gxsv8" event={"ID":"45673b67-f4a4-4100-adfa-6cdb3a83f093","Type":"ContainerStarted","Data":"45d0c47daf372770ba84bcb12ce44d388a940e5addc2214cfd895663946e0603"} Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.511232 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.511712 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-log" containerID="cri-o://90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872" gracePeriod=30 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.511866 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-api" containerID="cri-o://c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0" gracePeriod=30 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.531544 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.531867 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="252aaba1-e252-4c10-b9de-8e6100e48267" containerName="nova-scheduler-scheduler" containerID="cri-o://285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30" gracePeriod=30 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.550424 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:33 crc 
kubenswrapper[4737]: I0126 18:57:33.551221 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-log" containerID="cri-o://8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" gracePeriod=30 Jan 26 18:57:33 crc kubenswrapper[4737]: I0126 18:57:33.551867 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-metadata" containerID="cri-o://91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" gracePeriod=30 Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.310364 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.412689 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerStarted","Data":"e1e11b54eb022a29f79251415a23a4ebf6df41dfb4be8b44ac01f4ca9b08e539"} Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415448 4737 generic.go:334] "Generic (PLEG): container finished" podID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerID="91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" exitCode=0 Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415478 4737 generic.go:334] "Generic (PLEG): container finished" podID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerID="8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" exitCode=143 Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415519 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerDied","Data":"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587"} Jan 26 18:57:34 crc 
kubenswrapper[4737]: I0126 18:57:34.415543 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerDied","Data":"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11"} Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415554 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47","Type":"ContainerDied","Data":"636b7e1b50b77a2e77a176a4a6e4760244e78027dda02b05a2a38e277d2e7cac"} Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415570 4737 scope.go:117] "RemoveContainer" containerID="91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.415704 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.419208 4737 generic.go:334] "Generic (PLEG): container finished" podID="45673b67-f4a4-4100-adfa-6cdb3a83f093" containerID="6e12c8d35ef900f0488ae3a40792a50def952d0040c672c6ceb17da7f17f4422" exitCode=0 Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.419266 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7e93-account-create-update-gxsv8" event={"ID":"45673b67-f4a4-4100-adfa-6cdb3a83f093","Type":"ContainerDied","Data":"6e12c8d35ef900f0488ae3a40792a50def952d0040c672c6ceb17da7f17f4422"} Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.422369 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs\") pod \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.422585 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle\") pod \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.422702 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data\") pod \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.422766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs\") pod \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.422922 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7gcp\" (UniqueName: \"kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp\") pod \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\" (UID: \"5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47\") " Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.426417 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs" (OuterVolumeSpecName: "logs") pod "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" (UID: "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.428843 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp" (OuterVolumeSpecName: "kube-api-access-k7gcp") pod "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" (UID: "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47"). InnerVolumeSpecName "kube-api-access-k7gcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.429772 4737 generic.go:334] "Generic (PLEG): container finished" podID="2a92049d-1c34-47c9-b128-366728af476a" containerID="90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872" exitCode=143 Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.429984 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerDied","Data":"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872"} Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.476465 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data" (OuterVolumeSpecName: "config-data") pod "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" (UID: "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.492417 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" (UID: "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.523246 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" (UID: "5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.717744 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7gcp\" (UniqueName: \"kubernetes.io/projected/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-kube-api-access-k7gcp\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.718285 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.718301 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.718312 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.718321 4737 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.813674 4737 scope.go:117] "RemoveContainer" 
containerID="8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.834767 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.860475 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.875141 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:34 crc kubenswrapper[4737]: E0126 18:57:34.875716 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" containerName="nova-manage" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.875755 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" containerName="nova-manage" Jan 26 18:57:34 crc kubenswrapper[4737]: E0126 18:57:34.875809 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-metadata" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.875816 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-metadata" Jan 26 18:57:34 crc kubenswrapper[4737]: E0126 18:57:34.875825 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-log" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.875832 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-log" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.876061 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-log" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.876089 
4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" containerName="nova-metadata-metadata" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.876101 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" containerName="nova-manage" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.877430 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.878931 4737 scope.go:117] "RemoveContainer" containerID="91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.883034 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.883227 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.883554 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:57:34 crc kubenswrapper[4737]: E0126 18:57:34.887577 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587\": container with ID starting with 91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587 not found: ID does not exist" containerID="91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.887641 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587"} err="failed to get container status \"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587\": rpc 
error: code = NotFound desc = could not find container \"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587\": container with ID starting with 91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587 not found: ID does not exist" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.887680 4737 scope.go:117] "RemoveContainer" containerID="8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" Jan 26 18:57:34 crc kubenswrapper[4737]: E0126 18:57:34.895310 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11\": container with ID starting with 8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11 not found: ID does not exist" containerID="8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.895423 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11"} err="failed to get container status \"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11\": rpc error: code = NotFound desc = could not find container \"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11\": container with ID starting with 8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11 not found: ID does not exist" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.895490 4737 scope.go:117] "RemoveContainer" containerID="91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.896291 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587"} err="failed to get container status 
\"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587\": rpc error: code = NotFound desc = could not find container \"91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587\": container with ID starting with 91c3102f11843a9cb7070393df4a673b0f168a8d19f549b7bd2e530977b32587 not found: ID does not exist" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.896359 4737 scope.go:117] "RemoveContainer" containerID="8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.896720 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11"} err="failed to get container status \"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11\": rpc error: code = NotFound desc = could not find container \"8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11\": container with ID starting with 8aeba4181ad38c48720e15d9a829ad3fd4d0b2a45d0ae003ed12e6bab9730f11 not found: ID does not exist" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.922474 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.922769 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.922972 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.923091 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwsf\" (UniqueName: \"kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:34 crc kubenswrapper[4737]: I0126 18:57:34.923360 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.011765 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47" path="/var/lib/kubelet/pods/5a1eca90-8a72-4b98-ac6b-c81a5d1e2e47/volumes" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.030573 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.030750 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 
18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.030931 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.031484 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.031607 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwsf\" (UniqueName: \"kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.036911 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.042755 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.043173 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.048193 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.076953 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwsf\" (UniqueName: \"kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf\") pod \"nova-metadata-0\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.211268 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.380165 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.448611 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts\") pod \"e850b319-4b13-4da1-a138-3373c2c6ecd2\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.449149 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4zv5\" (UniqueName: \"kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5\") pod \"e850b319-4b13-4da1-a138-3373c2c6ecd2\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.449219 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data\") pod \"e850b319-4b13-4da1-a138-3373c2c6ecd2\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.449392 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle\") pod \"e850b319-4b13-4da1-a138-3373c2c6ecd2\" (UID: \"e850b319-4b13-4da1-a138-3373c2c6ecd2\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.472793 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.473245 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts" (OuterVolumeSpecName: "scripts") pod "e850b319-4b13-4da1-a138-3373c2c6ecd2" (UID: "e850b319-4b13-4da1-a138-3373c2c6ecd2"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.473565 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5" (OuterVolumeSpecName: "kube-api-access-n4zv5") pod "e850b319-4b13-4da1-a138-3373c2c6ecd2" (UID: "e850b319-4b13-4da1-a138-3373c2c6ecd2"). InnerVolumeSpecName "kube-api-access-n4zv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.566429 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhckz\" (UniqueName: \"kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz\") pod \"410f0427-0248-40f9-adc7-33af510f7842\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.566640 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts\") pod \"410f0427-0248-40f9-adc7-33af510f7842\" (UID: \"410f0427-0248-40f9-adc7-33af510f7842\") " Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.567623 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.567655 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4zv5\" (UniqueName: \"kubernetes.io/projected/e850b319-4b13-4da1-a138-3373c2c6ecd2-kube-api-access-n4zv5\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.577537 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "410f0427-0248-40f9-adc7-33af510f7842" (UID: "410f0427-0248-40f9-adc7-33af510f7842"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.582449 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b22wn" event={"ID":"e850b319-4b13-4da1-a138-3373c2c6ecd2","Type":"ContainerDied","Data":"64dfe153be568f6f6eaab719860f8ca08491c47b2c88830f452128c252343d28"} Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.582620 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dfe153be568f6f6eaab719860f8ca08491c47b2c88830f452128c252343d28" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.582778 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b22wn" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.583450 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e850b319-4b13-4da1-a138-3373c2c6ecd2" (UID: "e850b319-4b13-4da1-a138-3373c2c6ecd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.607942 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerStarted","Data":"12fb973cf92d54b816ba6e75248ad6cdd24eb8fbb58d6dec2be31e78c3b0d77c"} Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.628587 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz" (OuterVolumeSpecName: "kube-api-access-jhckz") pod "410f0427-0248-40f9-adc7-33af510f7842" (UID: "410f0427-0248-40f9-adc7-33af510f7842"). InnerVolumeSpecName "kube-api-access-jhckz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.628963 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data" (OuterVolumeSpecName: "config-data") pod "e850b319-4b13-4da1-a138-3373c2c6ecd2" (UID: "e850b319-4b13-4da1-a138-3373c2c6ecd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.645740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qcqmg" event={"ID":"410f0427-0248-40f9-adc7-33af510f7842","Type":"ContainerDied","Data":"ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0"} Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.646119 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece33eef67ed50d0aa37a98e84f5a028b6cbcd3fc02018502b0b15739a1d32b0" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.646185 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-qcqmg" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.671970 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.672021 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410f0427-0248-40f9-adc7-33af510f7842-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.672038 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e850b319-4b13-4da1-a138-3373c2c6ecd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:35 crc kubenswrapper[4737]: I0126 18:57:35.672051 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhckz\" (UniqueName: \"kubernetes.io/projected/410f0427-0248-40f9-adc7-33af510f7842-kube-api-access-jhckz\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.113175 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.448523 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.523046 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:57:36 crc kubenswrapper[4737]: E0126 18:57:36.523783 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45673b67-f4a4-4100-adfa-6cdb3a83f093" containerName="mariadb-account-create-update" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.523813 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="45673b67-f4a4-4100-adfa-6cdb3a83f093" containerName="mariadb-account-create-update" Jan 26 18:57:36 crc kubenswrapper[4737]: E0126 18:57:36.523843 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e850b319-4b13-4da1-a138-3373c2c6ecd2" containerName="nova-cell1-conductor-db-sync" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.523866 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e850b319-4b13-4da1-a138-3373c2c6ecd2" containerName="nova-cell1-conductor-db-sync" Jan 26 18:57:36 crc kubenswrapper[4737]: E0126 18:57:36.523910 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410f0427-0248-40f9-adc7-33af510f7842" containerName="mariadb-database-create" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.523920 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="410f0427-0248-40f9-adc7-33af510f7842" containerName="mariadb-database-create" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.524274 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e850b319-4b13-4da1-a138-3373c2c6ecd2" containerName="nova-cell1-conductor-db-sync" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.524305 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="45673b67-f4a4-4100-adfa-6cdb3a83f093" containerName="mariadb-account-create-update" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.524325 4737 
memory_manager.go:354] "RemoveStaleState removing state" podUID="410f0427-0248-40f9-adc7-33af510f7842" containerName="mariadb-database-create" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.525607 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.526008 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts\") pod \"45673b67-f4a4-4100-adfa-6cdb3a83f093\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.526105 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55s2q\" (UniqueName: \"kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q\") pod \"45673b67-f4a4-4100-adfa-6cdb3a83f093\" (UID: \"45673b67-f4a4-4100-adfa-6cdb3a83f093\") " Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.527558 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45673b67-f4a4-4100-adfa-6cdb3a83f093" (UID: "45673b67-f4a4-4100-adfa-6cdb3a83f093"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.529655 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.536671 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q" (OuterVolumeSpecName: "kube-api-access-55s2q") pod "45673b67-f4a4-4100-adfa-6cdb3a83f093" (UID: "45673b67-f4a4-4100-adfa-6cdb3a83f093"). InnerVolumeSpecName "kube-api-access-55s2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.585703 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.598962 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.633767 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.633824 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95q4\" (UniqueName: \"kubernetes.io/projected/8c62bab3-337a-4449-ac7f-63dedc641524-kube-api-access-l95q4\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.633863 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.634000 4737 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45673b67-f4a4-4100-adfa-6cdb3a83f093-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.634015 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55s2q\" (UniqueName: \"kubernetes.io/projected/45673b67-f4a4-4100-adfa-6cdb3a83f093-kube-api-access-55s2q\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.703557 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerStarted","Data":"10d526d891d2442ffbe1d9dbb86dd489ff37736db0231dc3417d39be137f6a19"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.703967 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerStarted","Data":"fb98baca1b2daa5f6811f2ea5b873246ea691c0bf079b5ee90ac083354278a81"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.707484 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerStarted","Data":"8ae55e8357565150f14c185ff1b3d5f4de9de9f10a99166d6a0027fd5f9f2eef"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.709808 4737 generic.go:334] "Generic (PLEG): container finished" podID="252aaba1-e252-4c10-b9de-8e6100e48267" containerID="285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30" exitCode=0 Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 
18:57:36.709969 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"252aaba1-e252-4c10-b9de-8e6100e48267","Type":"ContainerDied","Data":"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.710011 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"252aaba1-e252-4c10-b9de-8e6100e48267","Type":"ContainerDied","Data":"c4143507e4ba82f6a0954feb9209ecff4ddac0fc5607e824206028e34742ce1e"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.710029 4737 scope.go:117] "RemoveContainer" containerID="285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.710582 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.725236 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7e93-account-create-update-gxsv8" event={"ID":"45673b67-f4a4-4100-adfa-6cdb3a83f093","Type":"ContainerDied","Data":"45d0c47daf372770ba84bcb12ce44d388a940e5addc2214cfd895663946e0603"} Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.725280 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d0c47daf372770ba84bcb12ce44d388a940e5addc2214cfd895663946e0603" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.725356 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7e93-account-create-update-gxsv8" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.738321 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtzjq\" (UniqueName: \"kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq\") pod \"252aaba1-e252-4c10-b9de-8e6100e48267\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.738974 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle\") pod \"252aaba1-e252-4c10-b9de-8e6100e48267\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.739247 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data\") pod \"252aaba1-e252-4c10-b9de-8e6100e48267\" (UID: \"252aaba1-e252-4c10-b9de-8e6100e48267\") " Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.739788 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.739825 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l95q4\" (UniqueName: \"kubernetes.io/projected/8c62bab3-337a-4449-ac7f-63dedc641524-kube-api-access-l95q4\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.739860 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.744580 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq" (OuterVolumeSpecName: "kube-api-access-vtzjq") pod "252aaba1-e252-4c10-b9de-8e6100e48267" (UID: "252aaba1-e252-4c10-b9de-8e6100e48267"). InnerVolumeSpecName "kube-api-access-vtzjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.759677 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.760445 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c62bab3-337a-4449-ac7f-63dedc641524-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.768950 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l95q4\" (UniqueName: \"kubernetes.io/projected/8c62bab3-337a-4449-ac7f-63dedc641524-kube-api-access-l95q4\") pod \"nova-cell1-conductor-0\" (UID: \"8c62bab3-337a-4449-ac7f-63dedc641524\") " pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.776058 4737 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "252aaba1-e252-4c10-b9de-8e6100e48267" (UID: "252aaba1-e252-4c10-b9de-8e6100e48267"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.791706 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data" (OuterVolumeSpecName: "config-data") pod "252aaba1-e252-4c10-b9de-8e6100e48267" (UID: "252aaba1-e252-4c10-b9de-8e6100e48267"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.797962 4737 scope.go:117] "RemoveContainer" containerID="285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30" Jan 26 18:57:36 crc kubenswrapper[4737]: E0126 18:57:36.798734 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30\": container with ID starting with 285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30 not found: ID does not exist" containerID="285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.798769 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30"} err="failed to get container status \"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30\": rpc error: code = NotFound desc = could not find container \"285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30\": container with ID starting with 285e46cc52fab39e08ffd257b90f7d72d6cc30c9c8e4df2a7d7263be1b2e3d30 not found: ID 
does not exist" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.842186 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.842218 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252aaba1-e252-4c10-b9de-8e6100e48267-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.842229 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtzjq\" (UniqueName: \"kubernetes.io/projected/252aaba1-e252-4c10-b9de-8e6100e48267-kube-api-access-vtzjq\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:36 crc kubenswrapper[4737]: I0126 18:57:36.868765 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.168532 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.179375 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.191173 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: E0126 18:57:37.192041 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252aaba1-e252-4c10-b9de-8e6100e48267" containerName="nova-scheduler-scheduler" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.192188 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="252aaba1-e252-4c10-b9de-8e6100e48267" containerName="nova-scheduler-scheduler" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.192457 4737 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="252aaba1-e252-4c10-b9de-8e6100e48267" containerName="nova-scheduler-scheduler" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.193453 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.201412 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.201654 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.263034 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gkp\" (UniqueName: \"kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.263301 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.263531 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.297953 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.365189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs\") pod \"2a92049d-1c34-47c9-b128-366728af476a\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.365641 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle\") pod \"2a92049d-1c34-47c9-b128-366728af476a\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.365678 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data\") pod \"2a92049d-1c34-47c9-b128-366728af476a\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.365819 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n9jq\" (UniqueName: \"kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq\") pod \"2a92049d-1c34-47c9-b128-366728af476a\" (UID: \"2a92049d-1c34-47c9-b128-366728af476a\") " Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.366113 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs" (OuterVolumeSpecName: "logs") pod "2a92049d-1c34-47c9-b128-366728af476a" (UID: "2a92049d-1c34-47c9-b128-366728af476a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.366130 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.367637 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.368146 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gkp\" (UniqueName: \"kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.368324 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a92049d-1c34-47c9-b128-366728af476a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.374196 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.386104 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq" 
(OuterVolumeSpecName: "kube-api-access-9n9jq") pod "2a92049d-1c34-47c9-b128-366728af476a" (UID: "2a92049d-1c34-47c9-b128-366728af476a"). InnerVolumeSpecName "kube-api-access-9n9jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.397538 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gkp\" (UniqueName: \"kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.415338 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data" (OuterVolumeSpecName: "config-data") pod "2a92049d-1c34-47c9-b128-366728af476a" (UID: "2a92049d-1c34-47c9-b128-366728af476a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.420102 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.435203 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a92049d-1c34-47c9-b128-366728af476a" (UID: "2a92049d-1c34-47c9-b128-366728af476a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.471624 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.471677 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n9jq\" (UniqueName: \"kubernetes.io/projected/2a92049d-1c34-47c9-b128-366728af476a-kube-api-access-9n9jq\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.471690 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a92049d-1c34-47c9-b128-366728af476a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:37 crc kubenswrapper[4737]: W0126 18:57:37.563261 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c62bab3_337a_4449_ac7f_63dedc641524.slice/crio-237449f993b2a559b9c69e4658d6be389702aa661a5c850769e74c76778a8800 WatchSource:0}: Error finding container 237449f993b2a559b9c69e4658d6be389702aa661a5c850769e74c76778a8800: Status 404 returned error can't find the container with id 237449f993b2a559b9c69e4658d6be389702aa661a5c850769e74c76778a8800 Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.563807 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.618706 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.755312 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c62bab3-337a-4449-ac7f-63dedc641524","Type":"ContainerStarted","Data":"237449f993b2a559b9c69e4658d6be389702aa661a5c850769e74c76778a8800"} Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.777893 4737 generic.go:334] "Generic (PLEG): container finished" podID="2a92049d-1c34-47c9-b128-366728af476a" containerID="c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0" exitCode=0 Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.778095 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerDied","Data":"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0"} Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.778143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a92049d-1c34-47c9-b128-366728af476a","Type":"ContainerDied","Data":"e3d1103dd282442d18e4998c54c155735870696ea0fa1d1d84b2df26b1982ce9"} Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.778181 4737 scope.go:117] "RemoveContainer" containerID="c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.784972 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.795740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerStarted","Data":"28013b9ec0f1f9f3bb8e98e8f8a262e6f3f2c7edcfdbec931ddaec24c8c15a96"} Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.837491 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.837461106 podStartE2EDuration="3.837461106s" podCreationTimestamp="2026-01-26 18:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:37.829193902 +0000 UTC m=+1631.137388640" watchObservedRunningTime="2026-01-26 18:57:37.837461106 +0000 UTC m=+1631.145655814" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.886720 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.889906 4737 scope.go:117] "RemoveContainer" containerID="90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.906110 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.933169 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:37 crc kubenswrapper[4737]: E0126 18:57:37.933950 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-log" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.933978 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-log" Jan 26 18:57:37 crc kubenswrapper[4737]: E0126 18:57:37.934024 4737 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-api" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.934035 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-api" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.934319 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-api" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.934349 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a92049d-1c34-47c9-b128-366728af476a" containerName="nova-api-log" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.937550 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.945453 4737 scope.go:117] "RemoveContainer" containerID="c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.945683 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:57:37 crc kubenswrapper[4737]: E0126 18:57:37.947054 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0\": container with ID starting with c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0 not found: ID does not exist" containerID="c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.947128 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0"} err="failed to get container status 
\"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0\": rpc error: code = NotFound desc = could not find container \"c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0\": container with ID starting with c70535ceb1c154b2917d6d6227aa22d70ae2cfa94a14e6b7444dfdf3de6b23e0 not found: ID does not exist" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.947167 4737 scope.go:117] "RemoveContainer" containerID="90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872" Jan 26 18:57:37 crc kubenswrapper[4737]: E0126 18:57:37.948983 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872\": container with ID starting with 90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872 not found: ID does not exist" containerID="90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.949028 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872"} err="failed to get container status \"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872\": rpc error: code = NotFound desc = could not find container \"90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872\": container with ID starting with 90482cd04f867c621a9067e356b2c50c1620be34f6396a069eb7ce590115d872 not found: ID does not exist" Jan 26 18:57:37 crc kubenswrapper[4737]: I0126 18:57:37.958915 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.103487 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2nb\" (UniqueName: \"kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb\") 
pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.103952 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.104118 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.104196 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.208147 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.208802 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.208946 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8z2nb\" (UniqueName: \"kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.208979 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.209391 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.230948 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.233470 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.235429 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2nb\" (UniqueName: \"kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb\") pod \"nova-api-0\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " pod="openstack/nova-api-0" Jan 26 18:57:38 crc 
kubenswrapper[4737]: I0126 18:57:38.325751 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:57:38 crc kubenswrapper[4737]: W0126 18:57:38.333284 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2500616a_d9a9_42fd_b442_f922082a19b8.slice/crio-252eb0793cbe974156d87fe39861900806f5f68b5e4456c0c145454f3f9d138e WatchSource:0}: Error finding container 252eb0793cbe974156d87fe39861900806f5f68b5e4456c0c145454f3f9d138e: Status 404 returned error can't find the container with id 252eb0793cbe974156d87fe39861900806f5f68b5e4456c0c145454f3f9d138e Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.340273 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.817995 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8c62bab3-337a-4449-ac7f-63dedc641524","Type":"ContainerStarted","Data":"b65e2b9fa55ab988820d8856ac9cf47b99d189319f3fe04fea68155daed0baad"} Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.818737 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.825608 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2500616a-d9a9-42fd-b442-f922082a19b8","Type":"ContainerStarted","Data":"1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a"} Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.825676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2500616a-d9a9-42fd-b442-f922082a19b8","Type":"ContainerStarted","Data":"252eb0793cbe974156d87fe39861900806f5f68b5e4456c0c145454f3f9d138e"} Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.837560 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerStarted","Data":"1ab86a8d195cdea196b08f5823b0df352a870e6ab202a79c054a957a986a249d"} Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.837615 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.853041 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.852998225 podStartE2EDuration="2.852998225s" podCreationTimestamp="2026-01-26 18:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:38.851940821 +0000 UTC m=+1632.160135529" watchObservedRunningTime="2026-01-26 18:57:38.852998225 +0000 UTC m=+1632.161192933" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.879485 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.8794566339999998 podStartE2EDuration="1.879456634s" podCreationTimestamp="2026-01-26 18:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:38.874236883 +0000 UTC m=+1632.182431591" watchObservedRunningTime="2026-01-26 18:57:38.879456634 +0000 UTC m=+1632.187651362" Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.918175 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:57:38 crc kubenswrapper[4737]: I0126 18:57:38.932601 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.877569023 podStartE2EDuration="7.932575148s" podCreationTimestamp="2026-01-26 18:57:31 +0000 UTC" firstStartedPulling="2026-01-26 
18:57:33.049498734 +0000 UTC m=+1626.357693442" lastFinishedPulling="2026-01-26 18:57:38.104504859 +0000 UTC m=+1631.412699567" observedRunningTime="2026-01-26 18:57:38.901752886 +0000 UTC m=+1632.209947604" watchObservedRunningTime="2026-01-26 18:57:38.932575148 +0000 UTC m=+1632.240769856" Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.003856 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="252aaba1-e252-4c10-b9de-8e6100e48267" path="/var/lib/kubelet/pods/252aaba1-e252-4c10-b9de-8e6100e48267/volumes" Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.004828 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a92049d-1c34-47c9-b128-366728af476a" path="/var/lib/kubelet/pods/2a92049d-1c34-47c9-b128-366728af476a/volumes" Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.857967 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerStarted","Data":"5b7b068305aa8b3d924318ffcc53bd9376538108dce951e7ae134483255bf586"} Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.859408 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerStarted","Data":"228cec0f85512893cc98a5a56c2fff4e23835b066650fd31dd5131743927a67a"} Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.859506 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerStarted","Data":"00f20d3518625c4ca566f4a6b0a01afbb6ec5ed5d57060f4b6fefe64b98f7f2b"} Jan 26 18:57:39 crc kubenswrapper[4737]: I0126 18:57:39.884711 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.884674502 podStartE2EDuration="2.884674502s" podCreationTimestamp="2026-01-26 18:57:37 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:39.882809068 +0000 UTC m=+1633.191003776" watchObservedRunningTime="2026-01-26 18:57:39.884674502 +0000 UTC m=+1633.192869210" Jan 26 18:57:40 crc kubenswrapper[4737]: I0126 18:57:40.211371 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:57:40 crc kubenswrapper[4737]: I0126 18:57:40.211758 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.780441 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-8cnbf"] Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.782250 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.785342 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.785547 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.785800 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.785904 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5skxc" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.792077 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8cnbf"] Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.821362 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts\") pod \"aodh-db-sync-8cnbf\" (UID: 
\"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.821503 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.821572 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.821720 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95njh\" (UniqueName: \"kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.923590 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.923673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 
18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.923794 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95njh\" (UniqueName: \"kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.923818 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.931276 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.933144 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.943122 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:41 crc kubenswrapper[4737]: I0126 18:57:41.949675 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95njh\" (UniqueName: 
\"kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh\") pod \"aodh-db-sync-8cnbf\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:42 crc kubenswrapper[4737]: I0126 18:57:42.119590 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:42 crc kubenswrapper[4737]: I0126 18:57:42.619803 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:57:42 crc kubenswrapper[4737]: I0126 18:57:42.721802 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8cnbf"] Jan 26 18:57:42 crc kubenswrapper[4737]: W0126 18:57:42.736405 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab0f9a5_ed4e_434e_ac42_e5293fcf921c.slice/crio-a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0 WatchSource:0}: Error finding container a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0: Status 404 returned error can't find the container with id a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0 Jan 26 18:57:42 crc kubenswrapper[4737]: I0126 18:57:42.905895 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8cnbf" event={"ID":"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c","Type":"ContainerStarted","Data":"a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0"} Jan 26 18:57:45 crc kubenswrapper[4737]: I0126 18:57:45.212683 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:57:45 crc kubenswrapper[4737]: I0126 18:57:45.213307 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:57:46 crc kubenswrapper[4737]: I0126 18:57:46.226308 4737 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:57:46 crc kubenswrapper[4737]: I0126 18:57:46.226308 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:57:46 crc kubenswrapper[4737]: I0126 18:57:46.904243 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 18:57:47 crc kubenswrapper[4737]: I0126 18:57:47.620005 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:57:47 crc kubenswrapper[4737]: I0126 18:57:47.666197 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:57:48 crc kubenswrapper[4737]: I0126 18:57:48.058584 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:57:48 crc kubenswrapper[4737]: I0126 18:57:48.341409 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:57:48 crc kubenswrapper[4737]: I0126 18:57:48.341475 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:57:49 crc kubenswrapper[4737]: I0126 18:57:49.423375 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 26 18:57:49 crc kubenswrapper[4737]: I0126 18:57:49.423536 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 18:57:51 crc kubenswrapper[4737]: I0126 18:57:51.068213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8cnbf" event={"ID":"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c","Type":"ContainerStarted","Data":"5580bb646518ef6e746d4366b7a2e9e14969d9e203b4138af6f9580bd603416d"} Jan 26 18:57:51 crc kubenswrapper[4737]: I0126 18:57:51.108133 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-8cnbf" podStartSLOduration=2.818321422 podStartE2EDuration="10.108104954s" podCreationTimestamp="2026-01-26 18:57:41 +0000 UTC" firstStartedPulling="2026-01-26 18:57:42.739104191 +0000 UTC m=+1636.047298899" lastFinishedPulling="2026-01-26 18:57:50.028887713 +0000 UTC m=+1643.337082431" observedRunningTime="2026-01-26 18:57:51.094171737 +0000 UTC m=+1644.402366455" watchObservedRunningTime="2026-01-26 18:57:51.108104954 +0000 UTC m=+1644.416299662" Jan 26 18:57:53 crc kubenswrapper[4737]: I0126 18:57:53.099004 4737 generic.go:334] "Generic (PLEG): container finished" podID="0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" containerID="5580bb646518ef6e746d4366b7a2e9e14969d9e203b4138af6f9580bd603416d" exitCode=0 Jan 26 18:57:53 crc kubenswrapper[4737]: I0126 18:57:53.099103 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8cnbf" event={"ID":"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c","Type":"ContainerDied","Data":"5580bb646518ef6e746d4366b7a2e9e14969d9e203b4138af6f9580bd603416d"} Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.784092 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.939426 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data\") pod \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.940189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle\") pod \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.940259 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95njh\" (UniqueName: \"kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh\") pod \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.940514 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts\") pod \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\" (UID: \"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c\") " Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.948477 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh" (OuterVolumeSpecName: "kube-api-access-95njh") pod "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" (UID: "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c"). InnerVolumeSpecName "kube-api-access-95njh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.950902 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts" (OuterVolumeSpecName: "scripts") pod "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" (UID: "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.976346 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data" (OuterVolumeSpecName: "config-data") pod "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" (UID: "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:54 crc kubenswrapper[4737]: I0126 18:57:54.981860 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" (UID: "0ab0f9a5-ed4e-434e-ac42-e5293fcf921c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.043715 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.044151 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95njh\" (UniqueName: \"kubernetes.io/projected/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-kube-api-access-95njh\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.044168 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.044179 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.124295 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8cnbf" event={"ID":"0ab0f9a5-ed4e-434e-ac42-e5293fcf921c","Type":"ContainerDied","Data":"a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0"} Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.124343 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a27a838d3f2f231195df0257717970d398b6544478e1bbdfc0c2bd47c1e651a0" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.124397 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8cnbf" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.217764 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.220548 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:57:55 crc kubenswrapper[4737]: I0126 18:57:55.226385 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.141781 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.861106 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.918144 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 18:57:56 crc kubenswrapper[4737]: E0126 18:57:56.918820 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b82b7d0-7418-465f-a126-5882e578889b" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.918843 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b82b7d0-7418-465f-a126-5882e578889b" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:57:56 crc kubenswrapper[4737]: E0126 18:57:56.918866 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" containerName="aodh-db-sync" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.918878 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" containerName="aodh-db-sync" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.919208 4737 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" containerName="aodh-db-sync" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.919241 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b82b7d0-7418-465f-a126-5882e578889b" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.922004 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.926751 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.926790 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.926918 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92w9\" (UniqueName: \"kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.926936 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.927437 4737 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.927492 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5skxc" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.927450 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 18:57:56 crc kubenswrapper[4737]: I0126 18:57:56.932210 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.042795 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle\") pod \"4b82b7d0-7418-465f-a126-5882e578889b\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.042990 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hxjn\" (UniqueName: \"kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn\") pod \"4b82b7d0-7418-465f-a126-5882e578889b\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.043123 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data\") pod \"4b82b7d0-7418-465f-a126-5882e578889b\" (UID: \"4b82b7d0-7418-465f-a126-5882e578889b\") " Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.045655 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc 
kubenswrapper[4737]: I0126 18:57:57.045704 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92w9\" (UniqueName: \"kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.046027 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.046046 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.051676 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.055708 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.064867 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " 
pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.081333 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92w9\" (UniqueName: \"kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9\") pod \"aodh-0\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.087358 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn" (OuterVolumeSpecName: "kube-api-access-5hxjn") pod "4b82b7d0-7418-465f-a126-5882e578889b" (UID: "4b82b7d0-7418-465f-a126-5882e578889b"). InnerVolumeSpecName "kube-api-access-5hxjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.108726 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b82b7d0-7418-465f-a126-5882e578889b" (UID: "4b82b7d0-7418-465f-a126-5882e578889b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.127457 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data" (OuterVolumeSpecName: "config-data") pod "4b82b7d0-7418-465f-a126-5882e578889b" (UID: "4b82b7d0-7418-465f-a126-5882e578889b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.147709 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.147736 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hxjn\" (UniqueName: \"kubernetes.io/projected/4b82b7d0-7418-465f-a126-5882e578889b-kube-api-access-5hxjn\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.147747 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b82b7d0-7418-465f-a126-5882e578889b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.148719 4737 generic.go:334] "Generic (PLEG): container finished" podID="4b82b7d0-7418-465f-a126-5882e578889b" containerID="c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256" exitCode=137 Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.149040 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.149140 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b82b7d0-7418-465f-a126-5882e578889b","Type":"ContainerDied","Data":"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256"} Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.149196 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4b82b7d0-7418-465f-a126-5882e578889b","Type":"ContainerDied","Data":"09d7715b1e424057d1e9d24a1bfda745683008af0b7dfb381a27b4be575acc78"} Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.149218 4737 scope.go:117] "RemoveContainer" containerID="c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.250845 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.279431 4737 scope.go:117] "RemoveContainer" containerID="c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256" Jan 26 18:57:57 crc kubenswrapper[4737]: E0126 18:57:57.281088 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256\": container with ID starting with c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256 not found: ID does not exist" containerID="c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.281146 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256"} err="failed to get container status 
\"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256\": rpc error: code = NotFound desc = could not find container \"c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256\": container with ID starting with c4caed85dfdaa08f522433e6649a9f5f1190c348bd22173b1ffdb9c004d73256 not found: ID does not exist" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.382121 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.402528 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.432941 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.434542 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.444222 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.444415 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.444525 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.451032 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.570129 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.570525 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vq9\" (UniqueName: \"kubernetes.io/projected/32bea17c-5210-413d-81b5-e30c0dbc0c77-kube-api-access-d8vq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.570804 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.570906 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.570988 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.673121 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.673245 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.673311 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.673412 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.673474 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vq9\" (UniqueName: \"kubernetes.io/projected/32bea17c-5210-413d-81b5-e30c0dbc0c77-kube-api-access-d8vq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.684490 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.684579 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.689643 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.706124 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bea17c-5210-413d-81b5-e30c0dbc0c77-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.723696 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vq9\" (UniqueName: \"kubernetes.io/projected/32bea17c-5210-413d-81b5-e30c0dbc0c77-kube-api-access-d8vq9\") pod \"nova-cell1-novncproxy-0\" (UID: \"32bea17c-5210-413d-81b5-e30c0dbc0c77\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:57 crc kubenswrapper[4737]: I0126 18:57:57.856751 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.030103 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.194391 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerStarted","Data":"c20dea87a4e7e4833431c87aab7388815e36efba3dd58e68a004067566744dcd"} Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.351349 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.353657 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.358128 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.359781 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.485897 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 18:57:58 crc kubenswrapper[4737]: I0126 18:57:58.995822 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b82b7d0-7418-465f-a126-5882e578889b" path="/var/lib/kubelet/pods/4b82b7d0-7418-465f-a126-5882e578889b/volumes" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.219221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32bea17c-5210-413d-81b5-e30c0dbc0c77","Type":"ContainerStarted","Data":"5c2b72e3d9b39f7adc2cad6cdd62369206bf9878adb0efa8fabdaf1fcf6111dc"} Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.219289 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32bea17c-5210-413d-81b5-e30c0dbc0c77","Type":"ContainerStarted","Data":"f3b983d8cc9e00a3177cca5ca7077e545f1a5462d6f188cbf86948d04cdbe7dc"} Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.219314 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.223375 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.260929 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.260904964 podStartE2EDuration="2.260904964s" podCreationTimestamp="2026-01-26 18:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:59.240875916 +0000 UTC m=+1652.549070614" watchObservedRunningTime="2026-01-26 18:57:59.260904964 +0000 UTC m=+1652.569099672" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.615400 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.623669 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.713221 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.713402 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.713467 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.713758 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nss\" (UniqueName: \"kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.716891 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc\") pod 
\"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.716950 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.768151 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820442 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820566 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820673 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l8nss\" (UniqueName: \"kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820735 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.820762 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.821913 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.822039 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.822782 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.822936 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.825051 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.853409 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nss\" (UniqueName: \"kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss\") pod \"dnsmasq-dns-6b7bbf7cf9-2rt9q\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:57:59 crc kubenswrapper[4737]: I0126 18:57:59.973354 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:58:00 crc kubenswrapper[4737]: I0126 18:58:00.644391 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:58:00 crc kubenswrapper[4737]: I0126 18:58:00.950991 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:58:00 crc kubenswrapper[4737]: I0126 18:58:00.951351 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.247245 4737 generic.go:334] "Generic (PLEG): container finished" podID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerID="e4f2f5c857c1c7e95c45d76e27956cde41b8ff646f347ec3ac87ede251084f09" exitCode=0 Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.247389 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" event={"ID":"38de7871-ef90-4700-b77f-abf3c4f9a99d","Type":"ContainerDied","Data":"e4f2f5c857c1c7e95c45d76e27956cde41b8ff646f347ec3ac87ede251084f09"} Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.247470 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" event={"ID":"38de7871-ef90-4700-b77f-abf3c4f9a99d","Type":"ContainerStarted","Data":"690056ace108600476ac20610e5d45511c30302065c1b1edf704d484f9d9451f"} Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.492008 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.492659 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-central-agent" containerID="cri-o://e1e11b54eb022a29f79251415a23a4ebf6df41dfb4be8b44ac01f4ca9b08e539" gracePeriod=30 Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.493284 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" containerID="cri-o://1ab86a8d195cdea196b08f5823b0df352a870e6ab202a79c054a957a986a249d" gracePeriod=30 Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.493338 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-notification-agent" containerID="cri-o://12fb973cf92d54b816ba6e75248ad6cdd24eb8fbb58d6dec2be31e78c3b0d77c" gracePeriod=30 Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.493454 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="sg-core" containerID="cri-o://8ae55e8357565150f14c185ff1b3d5f4de9de9f10a99166d6a0027fd5f9f2eef" gracePeriod=30 Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.510792 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.248:3000/\": EOF" Jan 26 18:58:01 crc kubenswrapper[4737]: I0126 18:58:01.852962 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" probeResult="failure" output="Get 
\"http://10.217.0.248:3000/\": dial tcp 10.217.0.248:3000: connect: connection refused" Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.261823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" event={"ID":"38de7871-ef90-4700-b77f-abf3c4f9a99d","Type":"ContainerStarted","Data":"3895191728f2e0a03e3de77c7fbfeda4fe6b2bc3cdfcc08cd0e5deefe97a9c53"} Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.262160 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.265558 4737 generic.go:334] "Generic (PLEG): container finished" podID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerID="1ab86a8d195cdea196b08f5823b0df352a870e6ab202a79c054a957a986a249d" exitCode=0 Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.265598 4737 generic.go:334] "Generic (PLEG): container finished" podID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerID="8ae55e8357565150f14c185ff1b3d5f4de9de9f10a99166d6a0027fd5f9f2eef" exitCode=2 Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.265611 4737 generic.go:334] "Generic (PLEG): container finished" podID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerID="e1e11b54eb022a29f79251415a23a4ebf6df41dfb4be8b44ac01f4ca9b08e539" exitCode=0 Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.265633 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerDied","Data":"1ab86a8d195cdea196b08f5823b0df352a870e6ab202a79c054a957a986a249d"} Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.265676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerDied","Data":"8ae55e8357565150f14c185ff1b3d5f4de9de9f10a99166d6a0027fd5f9f2eef"} Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 
18:58:02.265689 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerDied","Data":"e1e11b54eb022a29f79251415a23a4ebf6df41dfb4be8b44ac01f4ca9b08e539"} Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.301203 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" podStartSLOduration=3.301184373 podStartE2EDuration="3.301184373s" podCreationTimestamp="2026-01-26 18:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:02.295461509 +0000 UTC m=+1655.603656227" watchObservedRunningTime="2026-01-26 18:58:02.301184373 +0000 UTC m=+1655.609379081" Jan 26 18:58:02 crc kubenswrapper[4737]: I0126 18:58:02.857125 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:58:03 crc kubenswrapper[4737]: I0126 18:58:03.281057 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerStarted","Data":"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79"} Jan 26 18:58:03 crc kubenswrapper[4737]: I0126 18:58:03.624550 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:03 crc kubenswrapper[4737]: I0126 18:58:03.628294 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-log" containerID="cri-o://228cec0f85512893cc98a5a56c2fff4e23835b066650fd31dd5131743927a67a" gracePeriod=30 Jan 26 18:58:03 crc kubenswrapper[4737]: I0126 18:58:03.629025 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84ee1644-a176-4279-920d-4b71999bdf59" 
containerName="nova-api-api" containerID="cri-o://5b7b068305aa8b3d924318ffcc53bd9376538108dce951e7ae134483255bf586" gracePeriod=30 Jan 26 18:58:04 crc kubenswrapper[4737]: I0126 18:58:04.030028 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:04 crc kubenswrapper[4737]: I0126 18:58:04.299392 4737 generic.go:334] "Generic (PLEG): container finished" podID="84ee1644-a176-4279-920d-4b71999bdf59" containerID="228cec0f85512893cc98a5a56c2fff4e23835b066650fd31dd5131743927a67a" exitCode=143 Jan 26 18:58:04 crc kubenswrapper[4737]: I0126 18:58:04.299437 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerDied","Data":"228cec0f85512893cc98a5a56c2fff4e23835b066650fd31dd5131743927a67a"} Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.327541 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerStarted","Data":"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480"} Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.333926 4737 generic.go:334] "Generic (PLEG): container finished" podID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerID="12fb973cf92d54b816ba6e75248ad6cdd24eb8fbb58d6dec2be31e78c3b0d77c" exitCode=0 Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.333973 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerDied","Data":"12fb973cf92d54b816ba6e75248ad6cdd24eb8fbb58d6dec2be31e78c3b0d77c"} Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.728211 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913336 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913490 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rn54\" (UniqueName: \"kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913513 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913597 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913650 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.913987 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.914228 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.915028 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.915083 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data\") pod \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\" (UID: \"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934\") " Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.915829 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.915850 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.924283 4737 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54" (OuterVolumeSpecName: "kube-api-access-8rn54") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "kube-api-access-8rn54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.932698 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts" (OuterVolumeSpecName: "scripts") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:05 crc kubenswrapper[4737]: I0126 18:58:05.969694 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.032180 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rn54\" (UniqueName: \"kubernetes.io/projected/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-kube-api-access-8rn54\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.032220 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.032233 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.049222 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.111411 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data" (OuterVolumeSpecName: "config-data") pod "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" (UID: "d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.134799 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.134861 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.373014 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934","Type":"ContainerDied","Data":"8b8562d63e011bd6117ba96b7aa5eb4410d1a9aea9b73ae89be4e0641c9133e2"} Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.373089 4737 scope.go:117] "RemoveContainer" containerID="1ab86a8d195cdea196b08f5823b0df352a870e6ab202a79c054a957a986a249d" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.373141 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.428772 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.447043 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.461805 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:06 crc kubenswrapper[4737]: E0126 18:58:06.462414 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-central-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462445 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-central-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: E0126 18:58:06.462494 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462504 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" Jan 26 18:58:06 crc kubenswrapper[4737]: E0126 18:58:06.462514 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="sg-core" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462523 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="sg-core" Jan 26 18:58:06 crc kubenswrapper[4737]: E0126 18:58:06.462534 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-notification-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462543 4737 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-notification-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462886 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-notification-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462959 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="proxy-httpd" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.462986 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="ceilometer-central-agent" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.463009 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" containerName="sg-core" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.469511 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.470721 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.472856 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.475699 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.597772 4737 scope.go:117] "RemoveContainer" containerID="8ae55e8357565150f14c185ff1b3d5f4de9de9f10a99166d6a0027fd5f9f2eef" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.645893 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646013 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ngg\" (UniqueName: \"kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646139 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646219 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646338 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646409 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.646569 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.654100 4737 scope.go:117] "RemoveContainer" containerID="12fb973cf92d54b816ba6e75248ad6cdd24eb8fbb58d6dec2be31e78c3b0d77c" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.729204 4737 scope.go:117] "RemoveContainer" containerID="e1e11b54eb022a29f79251415a23a4ebf6df41dfb4be8b44ac01f4ca9b08e539" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.749456 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " 
pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.749599 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ngg\" (UniqueName: \"kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.749748 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.749864 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.750005 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.750141 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.750197 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.750463 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.750907 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.754295 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.754693 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.755365 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.755733 4737 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.783644 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ngg\" (UniqueName: \"kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg\") pod \"ceilometer-0\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " pod="openstack/ceilometer-0" Jan 26 18:58:06 crc kubenswrapper[4737]: I0126 18:58:06.800553 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.001491 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934" path="/var/lib/kubelet/pods/d4c0e0d5-e70d-4429-a1f0-cb2ee1aa4934/volumes" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.408599 4737 generic.go:334] "Generic (PLEG): container finished" podID="84ee1644-a176-4279-920d-4b71999bdf59" containerID="5b7b068305aa8b3d924318ffcc53bd9376538108dce951e7ae134483255bf586" exitCode=0 Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.408709 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerDied","Data":"5b7b068305aa8b3d924318ffcc53bd9376538108dce951e7ae134483255bf586"} Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.413064 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerStarted","Data":"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db"} Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.422491 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.472967 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.654868 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.699967 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data\") pod \"84ee1644-a176-4279-920d-4b71999bdf59\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.700092 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs\") pod \"84ee1644-a176-4279-920d-4b71999bdf59\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.700135 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2nb\" (UniqueName: \"kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb\") pod \"84ee1644-a176-4279-920d-4b71999bdf59\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.700499 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle\") pod \"84ee1644-a176-4279-920d-4b71999bdf59\" (UID: \"84ee1644-a176-4279-920d-4b71999bdf59\") " Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.700683 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs" (OuterVolumeSpecName: "logs") pod 
"84ee1644-a176-4279-920d-4b71999bdf59" (UID: "84ee1644-a176-4279-920d-4b71999bdf59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.701308 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84ee1644-a176-4279-920d-4b71999bdf59-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.720386 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb" (OuterVolumeSpecName: "kube-api-access-8z2nb") pod "84ee1644-a176-4279-920d-4b71999bdf59" (UID: "84ee1644-a176-4279-920d-4b71999bdf59"). InnerVolumeSpecName "kube-api-access-8z2nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.742204 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84ee1644-a176-4279-920d-4b71999bdf59" (UID: "84ee1644-a176-4279-920d-4b71999bdf59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.746599 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data" (OuterVolumeSpecName: "config-data") pod "84ee1644-a176-4279-920d-4b71999bdf59" (UID: "84ee1644-a176-4279-920d-4b71999bdf59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.803789 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.803835 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84ee1644-a176-4279-920d-4b71999bdf59-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.803848 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z2nb\" (UniqueName: \"kubernetes.io/projected/84ee1644-a176-4279-920d-4b71999bdf59-kube-api-access-8z2nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.857661 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:58:07 crc kubenswrapper[4737]: I0126 18:58:07.890007 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.433934 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerStarted","Data":"8507fbfd71a7c8f95cd523c227a3a54fb2414c3c9ddef7f5bf9fab65118452d1"} Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.434316 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerStarted","Data":"d9e3ad62d5b5ec73fe2c0d250ab3c3e51324055f49ffbebb8b77e96313b7f4f7"} Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.438480 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"84ee1644-a176-4279-920d-4b71999bdf59","Type":"ContainerDied","Data":"00f20d3518625c4ca566f4a6b0a01afbb6ec5ed5d57060f4b6fefe64b98f7f2b"} Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.438517 4737 scope.go:117] "RemoveContainer" containerID="5b7b068305aa8b3d924318ffcc53bd9376538108dce951e7ae134483255bf586" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.438645 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.456855 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.572136 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.615928 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.651322 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:08 crc kubenswrapper[4737]: E0126 18:58:08.652023 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-log" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.652045 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-log" Jan 26 18:58:08 crc kubenswrapper[4737]: E0126 18:58:08.652104 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-api" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.652117 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-api" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.652384 4737 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-api" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.652416 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ee1644-a176-4279-920d-4b71999bdf59" containerName="nova-api-log" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.654022 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.658919 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.676917 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.677143 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.677294 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.827977 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.828062 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.828183 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.828230 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6vmc\" (UniqueName: \"kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.828281 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.828406 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.896174 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xzv46"] Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.898044 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.900946 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.903258 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930477 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930531 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930587 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930619 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6vmc\" (UniqueName: \"kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.930721 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.935761 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzv46"] Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.935938 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.938314 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.950155 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.953498 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.954300 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:08 crc kubenswrapper[4737]: I0126 18:58:08.961316 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6vmc\" (UniqueName: \"kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc\") pod \"nova-api-0\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " pod="openstack/nova-api-0" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.002586 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ee1644-a176-4279-920d-4b71999bdf59" path="/var/lib/kubelet/pods/84ee1644-a176-4279-920d-4b71999bdf59/volumes" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.032431 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.032497 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.032538 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t28wc\" (UniqueName: \"kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.032699 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.049924 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.135484 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.135546 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.135598 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28wc\" (UniqueName: \"kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " 
pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.137859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.139687 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.143660 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.143793 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.155342 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28wc\" (UniqueName: \"kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc\") pod \"nova-cell1-cell-mapping-xzv46\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 
18:58:09.383775 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.474794 4737 scope.go:117] "RemoveContainer" containerID="228cec0f85512893cc98a5a56c2fff4e23835b066650fd31dd5131743927a67a" Jan 26 18:58:09 crc kubenswrapper[4737]: I0126 18:58:09.976238 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.114462 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzv46"] Jan 26 18:58:10 crc kubenswrapper[4737]: W0126 18:58:10.115321 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c3196f_2796_452a_ab7f_59145e00d722.slice/crio-5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9 WatchSource:0}: Error finding container 5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9: Status 404 returned error can't find the container with id 5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.232792 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:10 crc kubenswrapper[4737]: W0126 18:58:10.277226 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700 WatchSource:0}: Error finding container 84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700: Status 404 returned error can't find the container with id 84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.296288 4737 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.297088 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="dnsmasq-dns" containerID="cri-o://e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403" gracePeriod=10 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.514711 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerStarted","Data":"5a1cfaccbf52cd1801673ace83c1b829f4eb1a0aea443631dfa554501b1c5652"} Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.525969 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzv46" event={"ID":"d2c3196f-2796-452a-ab7f-59145e00d722","Type":"ContainerStarted","Data":"5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9"} Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.532452 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerStarted","Data":"84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700"} Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.547877 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerStarted","Data":"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261"} Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.548192 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-api" containerID="cri-o://ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79" gracePeriod=30 Jan 26 18:58:10 crc kubenswrapper[4737]: 
I0126 18:58:10.548536 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-notifier" containerID="cri-o://ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db" gracePeriod=30 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.548674 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-listener" containerID="cri-o://d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261" gracePeriod=30 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.548558 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-evaluator" containerID="cri-o://228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480" gracePeriod=30 Jan 26 18:58:10 crc kubenswrapper[4737]: I0126 18:58:10.615579 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.038822865 podStartE2EDuration="14.615551067s" podCreationTimestamp="2026-01-26 18:57:56 +0000 UTC" firstStartedPulling="2026-01-26 18:57:58.03842949 +0000 UTC m=+1651.346624198" lastFinishedPulling="2026-01-26 18:58:09.615157692 +0000 UTC m=+1662.923352400" observedRunningTime="2026-01-26 18:58:10.595107208 +0000 UTC m=+1663.903301916" watchObservedRunningTime="2026-01-26 18:58:10.615551067 +0000 UTC m=+1663.923745775" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.498835 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.598709 4737 generic.go:334] "Generic (PLEG): container finished" podID="d92c400c-6139-4277-b112-2c725f091503" containerID="228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480" exitCode=0 Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.598761 4737 generic.go:334] "Generic (PLEG): container finished" podID="d92c400c-6139-4277-b112-2c725f091503" containerID="ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79" exitCode=0 Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.598806 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerDied","Data":"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.598854 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerDied","Data":"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.602317 4737 generic.go:334] "Generic (PLEG): container finished" podID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerID="e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403" exitCode=0 Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.602396 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" event={"ID":"4b6804db-b6cc-41ad-bb1a-603bdca29f7f","Type":"ContainerDied","Data":"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.602445 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" 
event={"ID":"4b6804db-b6cc-41ad-bb1a-603bdca29f7f","Type":"ContainerDied","Data":"b4349625c82eadd172ffdac233b98962f55b9d2ad99eec67fe31d80c9379255f"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.602465 4737 scope.go:117] "RemoveContainer" containerID="e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.602654 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8p2s" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.610099 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerStarted","Data":"8e1d31ccbcf1363583f8c8c673fbaedb73b115c08c7c45de94a6792e2a3597b9"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.615258 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzv46" event={"ID":"d2c3196f-2796-452a-ab7f-59145e00d722","Type":"ContainerStarted","Data":"b87cd7a0a35b679ccd76d7661f35f934ceb713391288b1d35c9a4830710d2f82"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.623476 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerStarted","Data":"876d62ad6e2c9dc5cc6e191575777de8a5a69b9d05f15e8453aa93461c913a7b"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.623522 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerStarted","Data":"6db023feb220430c3fb72c715f6535d2d8effd9ed0c16a65355fd304803e322f"} Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641046 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc\") pod 
\"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641349 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0\") pod \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641401 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config\") pod \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641475 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg\") pod \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641563 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb\") pod \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.641588 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb\") pod \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\" (UID: \"4b6804db-b6cc-41ad-bb1a-603bdca29f7f\") " Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.650720 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg" (OuterVolumeSpecName: "kube-api-access-zjstg") pod "4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). InnerVolumeSpecName "kube-api-access-zjstg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.651979 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xzv46" podStartSLOduration=3.651955235 podStartE2EDuration="3.651955235s" podCreationTimestamp="2026-01-26 18:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:11.638177392 +0000 UTC m=+1664.946372100" watchObservedRunningTime="2026-01-26 18:58:11.651955235 +0000 UTC m=+1664.960149943" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.670768 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.670748635 podStartE2EDuration="3.670748635s" podCreationTimestamp="2026-01-26 18:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:11.661820466 +0000 UTC m=+1664.970015194" watchObservedRunningTime="2026-01-26 18:58:11.670748635 +0000 UTC m=+1664.978943343" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.685182 4737 scope.go:117] "RemoveContainer" containerID="aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.739228 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.746145 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.746175 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-kube-api-access-zjstg\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.752291 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.756586 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.800987 4737 scope.go:117] "RemoveContainer" containerID="e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403" Jan 26 18:58:11 crc kubenswrapper[4737]: E0126 18:58:11.801983 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403\": container with ID starting with e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403 not found: ID does not exist" containerID="e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.802018 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403"} err="failed to get container status \"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403\": rpc error: code = NotFound desc = could not find container \"e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403\": container with ID starting with e9571c9baea36e025096e54b33009a1b78d2a2c98391d28b6d9992276e4ac403 not found: ID does not exist" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.802040 4737 scope.go:117] "RemoveContainer" containerID="aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28" Jan 26 18:58:11 crc kubenswrapper[4737]: E0126 18:58:11.802866 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28\": container with ID starting with aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28 not found: ID does not exist" containerID="aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.802891 
4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28"} err="failed to get container status \"aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28\": rpc error: code = NotFound desc = could not find container \"aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28\": container with ID starting with aee62a199182feb54c12831e27f38c9b6c79049a2c17fc7561602ad72ca61e28 not found: ID does not exist" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.827594 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config" (OuterVolumeSpecName: "config") pod "4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.848722 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.848755 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.848764 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.856515 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"4b6804db-b6cc-41ad-bb1a-603bdca29f7f" (UID: "4b6804db-b6cc-41ad-bb1a-603bdca29f7f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:58:11 crc kubenswrapper[4737]: I0126 18:58:11.967304 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b6804db-b6cc-41ad-bb1a-603bdca29f7f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:12 crc kubenswrapper[4737]: I0126 18:58:12.004082 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:58:12 crc kubenswrapper[4737]: I0126 18:58:12.039251 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8p2s"] Jan 26 18:58:12 crc kubenswrapper[4737]: I0126 18:58:12.659143 4737 generic.go:334] "Generic (PLEG): container finished" podID="d92c400c-6139-4277-b112-2c725f091503" containerID="ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db" exitCode=0 Jan 26 18:58:12 crc kubenswrapper[4737]: I0126 18:58:12.659381 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerDied","Data":"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db"} Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.028595 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" path="/var/lib/kubelet/pods/4b6804db-b6cc-41ad-bb1a-603bdca29f7f/volumes" Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680387 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerStarted","Data":"90d8783d8c5e61efe1cb15a12357840c0dcd921e8cf7322dff932aed4382e5cb"} Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680703 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680689 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="sg-core" containerID="cri-o://8e1d31ccbcf1363583f8c8c673fbaedb73b115c08c7c45de94a6792e2a3597b9" gracePeriod=30 Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680637 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-notification-agent" containerID="cri-o://5a1cfaccbf52cd1801673ace83c1b829f4eb1a0aea443631dfa554501b1c5652" gracePeriod=30 Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680602 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="proxy-httpd" containerID="cri-o://90d8783d8c5e61efe1cb15a12357840c0dcd921e8cf7322dff932aed4382e5cb" gracePeriod=30 Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.680554 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-central-agent" containerID="cri-o://8507fbfd71a7c8f95cd523c227a3a54fb2414c3c9ddef7f5bf9fab65118452d1" gracePeriod=30 Jan 26 18:58:13 crc kubenswrapper[4737]: I0126 18:58:13.717007 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.229568138 podStartE2EDuration="7.716988878s" podCreationTimestamp="2026-01-26 18:58:06 +0000 UTC" firstStartedPulling="2026-01-26 18:58:07.449281528 +0000 UTC m=+1660.757476236" lastFinishedPulling="2026-01-26 18:58:12.936702268 +0000 UTC m=+1666.244896976" observedRunningTime="2026-01-26 18:58:13.705643673 +0000 UTC m=+1667.013838381" watchObservedRunningTime="2026-01-26 
18:58:13.716988878 +0000 UTC m=+1667.025183586" Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695169 4737 generic.go:334] "Generic (PLEG): container finished" podID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerID="90d8783d8c5e61efe1cb15a12357840c0dcd921e8cf7322dff932aed4382e5cb" exitCode=0 Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695559 4737 generic.go:334] "Generic (PLEG): container finished" podID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerID="8e1d31ccbcf1363583f8c8c673fbaedb73b115c08c7c45de94a6792e2a3597b9" exitCode=2 Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695574 4737 generic.go:334] "Generic (PLEG): container finished" podID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerID="5a1cfaccbf52cd1801673ace83c1b829f4eb1a0aea443631dfa554501b1c5652" exitCode=0 Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695234 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerDied","Data":"90d8783d8c5e61efe1cb15a12357840c0dcd921e8cf7322dff932aed4382e5cb"} Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695618 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerDied","Data":"8e1d31ccbcf1363583f8c8c673fbaedb73b115c08c7c45de94a6792e2a3597b9"} Jan 26 18:58:14 crc kubenswrapper[4737]: I0126 18:58:14.695637 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerDied","Data":"5a1cfaccbf52cd1801673ace83c1b829f4eb1a0aea443631dfa554501b1c5652"} Jan 26 18:58:16 crc kubenswrapper[4737]: I0126 18:58:16.721267 4737 generic.go:334] "Generic (PLEG): container finished" podID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerID="8507fbfd71a7c8f95cd523c227a3a54fb2414c3c9ddef7f5bf9fab65118452d1" exitCode=0 Jan 26 18:58:16 crc 
kubenswrapper[4737]: I0126 18:58:16.721327 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerDied","Data":"8507fbfd71a7c8f95cd523c227a3a54fb2414c3c9ddef7f5bf9fab65118452d1"} Jan 26 18:58:16 crc kubenswrapper[4737]: I0126 18:58:16.868547 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.003478 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.003856 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.003978 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004138 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004167 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004189 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004235 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48ngg\" (UniqueName: \"kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg\") pod \"89f98c62-56e0-456d-a719-2d79c54a3c79\" (UID: \"89f98c62-56e0-456d-a719-2d79c54a3c79\") " Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004489 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.004550 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.005027 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.005053 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89f98c62-56e0-456d-a719-2d79c54a3c79-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.010632 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts" (OuterVolumeSpecName: "scripts") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.014611 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg" (OuterVolumeSpecName: "kube-api-access-48ngg") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "kube-api-access-48ngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.043587 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.101983 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.107056 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.107102 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.107115 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48ngg\" (UniqueName: \"kubernetes.io/projected/89f98c62-56e0-456d-a719-2d79c54a3c79-kube-api-access-48ngg\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.107129 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.140687 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data" (OuterVolumeSpecName: "config-data") pod "89f98c62-56e0-456d-a719-2d79c54a3c79" (UID: "89f98c62-56e0-456d-a719-2d79c54a3c79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.209295 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f98c62-56e0-456d-a719-2d79c54a3c79-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.740329 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89f98c62-56e0-456d-a719-2d79c54a3c79","Type":"ContainerDied","Data":"d9e3ad62d5b5ec73fe2c0d250ab3c3e51324055f49ffbebb8b77e96313b7f4f7"} Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.740398 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.740780 4737 scope.go:117] "RemoveContainer" containerID="90d8783d8c5e61efe1cb15a12357840c0dcd921e8cf7322dff932aed4382e5cb" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.745892 4737 generic.go:334] "Generic (PLEG): container finished" podID="d2c3196f-2796-452a-ab7f-59145e00d722" containerID="b87cd7a0a35b679ccd76d7661f35f934ceb713391288b1d35c9a4830710d2f82" exitCode=0 Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.745925 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzv46" event={"ID":"d2c3196f-2796-452a-ab7f-59145e00d722","Type":"ContainerDied","Data":"b87cd7a0a35b679ccd76d7661f35f934ceb713391288b1d35c9a4830710d2f82"} Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.779254 4737 scope.go:117] "RemoveContainer" containerID="8e1d31ccbcf1363583f8c8c673fbaedb73b115c08c7c45de94a6792e2a3597b9" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.797993 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.807451 4737 scope.go:117] "RemoveContainer" 
containerID="5a1cfaccbf52cd1801673ace83c1b829f4eb1a0aea443631dfa554501b1c5652" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.820705 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.848829 4737 scope.go:117] "RemoveContainer" containerID="8507fbfd71a7c8f95cd523c227a3a54fb2414c3c9ddef7f5bf9fab65118452d1" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.857721 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858319 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="dnsmasq-dns" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858342 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="dnsmasq-dns" Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858354 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-central-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858362 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-central-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858377 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="init" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858383 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="init" Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858417 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-notification-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 
18:58:17.858423 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-notification-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858436 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="proxy-httpd" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858442 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="proxy-httpd" Jan 26 18:58:17 crc kubenswrapper[4737]: E0126 18:58:17.858457 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="sg-core" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858463 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="sg-core" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858692 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="sg-core" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858706 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-notification-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858724 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6804db-b6cc-41ad-bb1a-603bdca29f7f" containerName="dnsmasq-dns" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858744 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="ceilometer-central-agent" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.858757 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" containerName="proxy-httpd" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 
18:58:17.861277 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.864439 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.865889 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.889884 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.932583 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.932644 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.932812 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.932894 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd\") pod 
\"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.932958 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.933019 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:17 crc kubenswrapper[4737]: I0126 18:58:17.933056 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkvb\" (UniqueName: \"kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.034646 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.034766 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.034848 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.034899 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.034972 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.035005 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.035036 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkvb\" (UniqueName: \"kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.035802 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.035914 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.041138 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.042114 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.052697 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.052920 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.055781 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkvb\" (UniqueName: 
\"kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb\") pod \"ceilometer-0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.189548 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:18 crc kubenswrapper[4737]: I0126 18:58:18.784330 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:18 crc kubenswrapper[4737]: W0126 18:58:18.787217 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7228e6d5_f15d_4152_919c_fe757191dad0.slice/crio-0be5a2472ca4eab6531b1c5172e98afe6419f35aa44839c3e03b31058ea8f1c3 WatchSource:0}: Error finding container 0be5a2472ca4eab6531b1c5172e98afe6419f35aa44839c3e03b31058ea8f1c3: Status 404 returned error can't find the container with id 0be5a2472ca4eab6531b1c5172e98afe6419f35aa44839c3e03b31058ea8f1c3 Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.003198 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f98c62-56e0-456d-a719-2d79c54a3c79" path="/var/lib/kubelet/pods/89f98c62-56e0-456d-a719-2d79c54a3c79/volumes" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.052653 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.052703 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.428968 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.477474 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28wc\" (UniqueName: \"kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc\") pod \"d2c3196f-2796-452a-ab7f-59145e00d722\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.477586 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle\") pod \"d2c3196f-2796-452a-ab7f-59145e00d722\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.477708 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data\") pod \"d2c3196f-2796-452a-ab7f-59145e00d722\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.477975 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts\") pod \"d2c3196f-2796-452a-ab7f-59145e00d722\" (UID: \"d2c3196f-2796-452a-ab7f-59145e00d722\") " Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.487137 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts" (OuterVolumeSpecName: "scripts") pod "d2c3196f-2796-452a-ab7f-59145e00d722" (UID: "d2c3196f-2796-452a-ab7f-59145e00d722"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.496870 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc" (OuterVolumeSpecName: "kube-api-access-t28wc") pod "d2c3196f-2796-452a-ab7f-59145e00d722" (UID: "d2c3196f-2796-452a-ab7f-59145e00d722"). InnerVolumeSpecName "kube-api-access-t28wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.526715 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data" (OuterVolumeSpecName: "config-data") pod "d2c3196f-2796-452a-ab7f-59145e00d722" (UID: "d2c3196f-2796-452a-ab7f-59145e00d722"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.533173 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2c3196f-2796-452a-ab7f-59145e00d722" (UID: "d2c3196f-2796-452a-ab7f-59145e00d722"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.581654 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.581693 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28wc\" (UniqueName: \"kubernetes.io/projected/d2c3196f-2796-452a-ab7f-59145e00d722-kube-api-access-t28wc\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.581708 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.581722 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c3196f-2796-452a-ab7f-59145e00d722-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.776609 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerStarted","Data":"54bf97dd170e0a84b9489d653395c0cc1ce55eba0c03f6408ddeec8d9e48eef5"} Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.776670 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerStarted","Data":"0be5a2472ca4eab6531b1c5172e98afe6419f35aa44839c3e03b31058ea8f1c3"} Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.781450 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xzv46" 
event={"ID":"d2c3196f-2796-452a-ab7f-59145e00d722","Type":"ContainerDied","Data":"5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9"} Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.781487 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be5d690e068872134d6601c1071987e72a312caf463273744441ffd1a3bebc9" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.781603 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xzv46" Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.993157 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.994008 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-log" containerID="cri-o://6db023feb220430c3fb72c715f6535d2d8effd9ed0c16a65355fd304803e322f" gracePeriod=30 Jan 26 18:58:19 crc kubenswrapper[4737]: I0126 18:58:19.994712 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-api" containerID="cri-o://876d62ad6e2c9dc5cc6e191575777de8a5a69b9d05f15e8453aa93461c913a7b" gracePeriod=30 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.027587 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": EOF" Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.031671 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": EOF" Jan 26 18:58:20 crc 
kubenswrapper[4737]: I0126 18:58:20.043225 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.043678 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" containerName="nova-scheduler-scheduler" containerID="cri-o://1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" gracePeriod=30 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.073882 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.074195 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" containerID="cri-o://10d526d891d2442ffbe1d9dbb86dd489ff37736db0231dc3417d39be137f6a19" gracePeriod=30 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.074457 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" containerID="cri-o://28013b9ec0f1f9f3bb8e98e8f8a262e6f3f2c7edcfdbec931ddaec24c8c15a96" gracePeriod=30 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.796957 4737 generic.go:334] "Generic (PLEG): container finished" podID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerID="6db023feb220430c3fb72c715f6535d2d8effd9ed0c16a65355fd304803e322f" exitCode=143 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.797050 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerDied","Data":"6db023feb220430c3fb72c715f6535d2d8effd9ed0c16a65355fd304803e322f"} Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.799822 4737 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerStarted","Data":"8fbda44cb6642c3a88a2dc82188f3e0b3389ac2b2f28bc62fb3be0d40669ec05"} Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.801937 4737 generic.go:334] "Generic (PLEG): container finished" podID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerID="10d526d891d2442ffbe1d9dbb86dd489ff37736db0231dc3417d39be137f6a19" exitCode=143 Jan 26 18:58:20 crc kubenswrapper[4737]: I0126 18:58:20.801981 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerDied","Data":"10d526d891d2442ffbe1d9dbb86dd489ff37736db0231dc3417d39be137f6a19"} Jan 26 18:58:21 crc kubenswrapper[4737]: I0126 18:58:21.817003 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerStarted","Data":"99913179cb969e98760e34961e2e04cb75bef946735f87fc2a7382a0f43842ea"} Jan 26 18:58:22 crc kubenswrapper[4737]: E0126 18:58:22.620879 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a is running failed: container process not found" containerID="1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:58:22 crc kubenswrapper[4737]: E0126 18:58:22.622392 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a is running failed: container process not found" containerID="1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:58:22 
crc kubenswrapper[4737]: E0126 18:58:22.622705 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a is running failed: container process not found" containerID="1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 18:58:22 crc kubenswrapper[4737]: E0126 18:58:22.622743 4737 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" containerName="nova-scheduler-scheduler" Jan 26 18:58:22 crc kubenswrapper[4737]: I0126 18:58:22.834092 4737 generic.go:334] "Generic (PLEG): container finished" podID="2500616a-d9a9-42fd-b442-f922082a19b8" containerID="1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" exitCode=0 Jan 26 18:58:22 crc kubenswrapper[4737]: I0126 18:58:22.834115 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2500616a-d9a9-42fd-b442-f922082a19b8","Type":"ContainerDied","Data":"1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a"} Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.150705 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.266777 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:52368->10.217.0.250:8775: read: connection reset by peer" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.267121 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:52372->10.217.0.250:8775: read: connection reset by peer" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.291589 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle\") pod \"2500616a-d9a9-42fd-b442-f922082a19b8\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.291682 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data\") pod \"2500616a-d9a9-42fd-b442-f922082a19b8\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.292147 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78gkp\" (UniqueName: \"kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp\") pod \"2500616a-d9a9-42fd-b442-f922082a19b8\" (UID: \"2500616a-d9a9-42fd-b442-f922082a19b8\") " Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.309331 4737 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp" (OuterVolumeSpecName: "kube-api-access-78gkp") pod "2500616a-d9a9-42fd-b442-f922082a19b8" (UID: "2500616a-d9a9-42fd-b442-f922082a19b8"). InnerVolumeSpecName "kube-api-access-78gkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.352196 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data" (OuterVolumeSpecName: "config-data") pod "2500616a-d9a9-42fd-b442-f922082a19b8" (UID: "2500616a-d9a9-42fd-b442-f922082a19b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.396431 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2500616a-d9a9-42fd-b442-f922082a19b8" (UID: "2500616a-d9a9-42fd-b442-f922082a19b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.396516 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.396545 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78gkp\" (UniqueName: \"kubernetes.io/projected/2500616a-d9a9-42fd-b442-f922082a19b8-kube-api-access-78gkp\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.502005 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2500616a-d9a9-42fd-b442-f922082a19b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.849780 4737 generic.go:334] "Generic (PLEG): container finished" podID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerID="28013b9ec0f1f9f3bb8e98e8f8a262e6f3f2c7edcfdbec931ddaec24c8c15a96" exitCode=0 Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.849870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerDied","Data":"28013b9ec0f1f9f3bb8e98e8f8a262e6f3f2c7edcfdbec931ddaec24c8c15a96"} Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.849912 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edd548b7-dcc7-46ac-ac43-3ba6b63c903a","Type":"ContainerDied","Data":"fb98baca1b2daa5f6811f2ea5b873246ea691c0bf079b5ee90ac083354278a81"} Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.849926 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb98baca1b2daa5f6811f2ea5b873246ea691c0bf079b5ee90ac083354278a81" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 
18:58:23.851786 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2500616a-d9a9-42fd-b442-f922082a19b8","Type":"ContainerDied","Data":"252eb0793cbe974156d87fe39861900806f5f68b5e4456c0c145454f3f9d138e"} Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.851832 4737 scope.go:117] "RemoveContainer" containerID="1fdc2a941aa1602011a8f7ac6118ee190e992f9f96d0c097e4f452e1d40d8a1a" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.851994 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.862987 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerStarted","Data":"f2ef4d6692d291899f76821abc19adc66851e3228f986f581319a98593b12e2c"} Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.864915 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.914023 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.776225348 podStartE2EDuration="6.914004025s" podCreationTimestamp="2026-01-26 18:58:17 +0000 UTC" firstStartedPulling="2026-01-26 18:58:18.792919323 +0000 UTC m=+1672.101114041" lastFinishedPulling="2026-01-26 18:58:22.93069801 +0000 UTC m=+1676.238892718" observedRunningTime="2026-01-26 18:58:23.884439763 +0000 UTC m=+1677.192634481" watchObservedRunningTime="2026-01-26 18:58:23.914004025 +0000 UTC m=+1677.222198723" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.935180 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.949380 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.970610 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.996712 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:23 crc kubenswrapper[4737]: E0126 18:58:23.997577 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997606 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" Jan 26 18:58:23 crc kubenswrapper[4737]: E0126 18:58:23.997629 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c3196f-2796-452a-ab7f-59145e00d722" containerName="nova-manage" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997639 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c3196f-2796-452a-ab7f-59145e00d722" containerName="nova-manage" Jan 26 18:58:23 crc kubenswrapper[4737]: E0126 18:58:23.997661 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997669 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" Jan 26 18:58:23 crc kubenswrapper[4737]: E0126 18:58:23.997686 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" containerName="nova-scheduler-scheduler" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997694 4737 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" containerName="nova-scheduler-scheduler" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997927 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" containerName="nova-scheduler-scheduler" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997950 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-metadata" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997961 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" containerName="nova-metadata-log" Jan 26 18:58:23 crc kubenswrapper[4737]: I0126 18:58:23.997972 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c3196f-2796-452a-ab7f-59145e00d722" containerName="nova-manage" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.005349 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.010384 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.014312 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.015437 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data\") pod \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.015499 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle\") pod \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.015699 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs\") pod \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.015740 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwsf\" (UniqueName: \"kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf\") pod \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.015767 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs\") pod \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\" (UID: \"edd548b7-dcc7-46ac-ac43-3ba6b63c903a\") " Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.016653 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs" (OuterVolumeSpecName: "logs") pod "edd548b7-dcc7-46ac-ac43-3ba6b63c903a" (UID: "edd548b7-dcc7-46ac-ac43-3ba6b63c903a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.033379 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf" (OuterVolumeSpecName: "kube-api-access-sfwsf") pod "edd548b7-dcc7-46ac-ac43-3ba6b63c903a" (UID: "edd548b7-dcc7-46ac-ac43-3ba6b63c903a"). InnerVolumeSpecName "kube-api-access-sfwsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.063467 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data" (OuterVolumeSpecName: "config-data") pod "edd548b7-dcc7-46ac-ac43-3ba6b63c903a" (UID: "edd548b7-dcc7-46ac-ac43-3ba6b63c903a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.096566 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd548b7-dcc7-46ac-ac43-3ba6b63c903a" (UID: "edd548b7-dcc7-46ac-ac43-3ba6b63c903a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.119057 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "edd548b7-dcc7-46ac-ac43-3ba6b63c903a" (UID: "edd548b7-dcc7-46ac-ac43-3ba6b63c903a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.120247 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-config-data\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.120456 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznnd\" (UniqueName: \"kubernetes.io/projected/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-kube-api-access-lznnd\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.120507 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.121274 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.121302 4737 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwsf\" (UniqueName: \"kubernetes.io/projected/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-kube-api-access-sfwsf\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.121315 4737 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.121327 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.121340 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd548b7-dcc7-46ac-ac43-3ba6b63c903a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.223284 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-config-data\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.223404 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznnd\" (UniqueName: \"kubernetes.io/projected/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-kube-api-access-lznnd\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.223459 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.228122 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-config-data\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.229560 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.248439 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznnd\" (UniqueName: \"kubernetes.io/projected/a901aed9-dbba-43e3-bf8c-f6026e3ea49d-kube-api-access-lznnd\") pod \"nova-scheduler-0\" (UID: \"a901aed9-dbba-43e3-bf8c-f6026e3ea49d\") " pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.505162 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.886908 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.943817 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.962579 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.977240 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.980353 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.984858 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 18:58:24 crc kubenswrapper[4737]: I0126 18:58:24.985577 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.010306 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2500616a-d9a9-42fd-b442-f922082a19b8" path="/var/lib/kubelet/pods/2500616a-d9a9-42fd-b442-f922082a19b8/volumes" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.015129 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd548b7-dcc7-46ac-ac43-3ba6b63c903a" path="/var/lib/kubelet/pods/edd548b7-dcc7-46ac-ac43-3ba6b63c903a/volumes" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.016254 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.064048 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 18:58:25 crc kubenswrapper[4737]: W0126 18:58:25.065407 4737 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda901aed9_dbba_43e3_bf8c_f6026e3ea49d.slice/crio-14ea29d083fc8fa61dd8b63bc801e2ab206b819f58e4fd3c688da8a55d194246 WatchSource:0}: Error finding container 14ea29d083fc8fa61dd8b63bc801e2ab206b819f58e4fd3c688da8a55d194246: Status 404 returned error can't find the container with id 14ea29d083fc8fa61dd8b63bc801e2ab206b819f58e4fd3c688da8a55d194246 Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.152812 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-config-data\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.152887 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e472c4b-c138-4b34-b972-84afd363d6dd-logs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.152931 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.153011 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdxg\" (UniqueName: \"kubernetes.io/projected/4e472c4b-c138-4b34-b972-84afd363d6dd-kube-api-access-hhdxg\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.153051 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.255796 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-config-data\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.255878 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e472c4b-c138-4b34-b972-84afd363d6dd-logs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.255931 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.256000 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhdxg\" (UniqueName: \"kubernetes.io/projected/4e472c4b-c138-4b34-b972-84afd363d6dd-kube-api-access-hhdxg\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.256050 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.256997 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e472c4b-c138-4b34-b972-84afd363d6dd-logs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.262891 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-config-data\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.265971 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.272487 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e472c4b-c138-4b34-b972-84afd363d6dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.275421 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhdxg\" (UniqueName: \"kubernetes.io/projected/4e472c4b-c138-4b34-b972-84afd363d6dd-kube-api-access-hhdxg\") pod \"nova-metadata-0\" (UID: \"4e472c4b-c138-4b34-b972-84afd363d6dd\") " pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc 
kubenswrapper[4737]: I0126 18:58:25.321398 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.887969 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.939690 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e472c4b-c138-4b34-b972-84afd363d6dd","Type":"ContainerStarted","Data":"b2e493d836b124ee26eeb85962c9a81dfeebf21a20c9511abd63a2dabbd91aae"} Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.956299 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a901aed9-dbba-43e3-bf8c-f6026e3ea49d","Type":"ContainerStarted","Data":"c68f1e45cb8a3bde84cdd8bec80236049e80bc38bc1db92b015c730c0f5a0734"} Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.956379 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a901aed9-dbba-43e3-bf8c-f6026e3ea49d","Type":"ContainerStarted","Data":"14ea29d083fc8fa61dd8b63bc801e2ab206b819f58e4fd3c688da8a55d194246"} Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.961554 4737 generic.go:334] "Generic (PLEG): container finished" podID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerID="876d62ad6e2c9dc5cc6e191575777de8a5a69b9d05f15e8453aa93461c913a7b" exitCode=0 Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.961915 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerDied","Data":"876d62ad6e2c9dc5cc6e191575777de8a5a69b9d05f15e8453aa93461c913a7b"} Jan 26 18:58:25 crc kubenswrapper[4737]: I0126 18:58:25.981311 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.981286041 podStartE2EDuration="2.981286041s" 
podCreationTimestamp="2026-01-26 18:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:25.974392849 +0000 UTC m=+1679.282587557" watchObservedRunningTime="2026-01-26 18:58:25.981286041 +0000 UTC m=+1679.289480749" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.188882 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.226027 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.228114 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.238037 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.248324 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6vmc\" (UniqueName: \"kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.260130 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.260400 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs\") pod \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\" (UID: \"41b95787-7a5f-4e14-98f2-e2d9500a9df6\") " Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.269466 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs" (OuterVolumeSpecName: "logs") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.284405 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc" (OuterVolumeSpecName: "kube-api-access-n6vmc") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "kube-api-access-n6vmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.289370 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6vmc\" (UniqueName: \"kubernetes.io/projected/41b95787-7a5f-4e14-98f2-e2d9500a9df6-kube-api-access-n6vmc\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.289571 4737 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41b95787-7a5f-4e14-98f2-e2d9500a9df6-logs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.465424 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.468174 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data" (OuterVolumeSpecName: "config-data") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.471120 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.495959 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.496242 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.496330 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.498216 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "41b95787-7a5f-4e14-98f2-e2d9500a9df6" (UID: "41b95787-7a5f-4e14-98f2-e2d9500a9df6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.598665 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41b95787-7a5f-4e14-98f2-e2d9500a9df6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.979462 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"41b95787-7a5f-4e14-98f2-e2d9500a9df6","Type":"ContainerDied","Data":"84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700"} Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.979578 4737 scope.go:117] "RemoveContainer" containerID="876d62ad6e2c9dc5cc6e191575777de8a5a69b9d05f15e8453aa93461c913a7b" Jan 26 18:58:26 crc kubenswrapper[4737]: I0126 18:58:26.980909 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.001199 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e472c4b-c138-4b34-b972-84afd363d6dd","Type":"ContainerStarted","Data":"c9f758859441f124ed9e7708afe988efd97740583fd728550e38208c07c1cffa"} Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.001620 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4e472c4b-c138-4b34-b972-84afd363d6dd","Type":"ContainerStarted","Data":"5c7d2b59a9c4a507d020410d888a01fe39852db9ef3a36c242bac430fb5d3051"} Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.016774 4737 scope.go:117] "RemoveContainer" containerID="6db023feb220430c3fb72c715f6535d2d8effd9ed0c16a65355fd304803e322f" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.058876 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.058848443 podStartE2EDuration="3.058848443s" 
podCreationTimestamp="2026-01-26 18:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:27.035606318 +0000 UTC m=+1680.343801036" watchObservedRunningTime="2026-01-26 18:58:27.058848443 +0000 UTC m=+1680.367043151" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.094991 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.125264 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.137812 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:27 crc kubenswrapper[4737]: E0126 18:58:27.138415 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-api" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.138440 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-api" Jan 26 18:58:27 crc kubenswrapper[4737]: E0126 18:58:27.138491 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-log" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.138501 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-log" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.138823 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-log" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.138853 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" containerName="nova-api-api" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 
18:58:27.140445 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.143312 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.143613 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.143843 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.150626 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.221953 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xph\" (UniqueName: \"kubernetes.io/projected/dc6d57aa-811b-482e-abc2-5048e523ce88-kube-api-access-26xph\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.222026 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.222055 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d57aa-811b-482e-abc2-5048e523ce88-logs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.222144 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.222281 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.222356 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-config-data\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.326363 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.326601 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.326741 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-config-data\") pod 
\"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.326893 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26xph\" (UniqueName: \"kubernetes.io/projected/dc6d57aa-811b-482e-abc2-5048e523ce88-kube-api-access-26xph\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.326976 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.327030 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d57aa-811b-482e-abc2-5048e523ce88-logs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.327500 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc6d57aa-811b-482e-abc2-5048e523ce88-logs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.331252 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.332042 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.333574 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-config-data\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.333892 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6d57aa-811b-482e-abc2-5048e523ce88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.350262 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xph\" (UniqueName: \"kubernetes.io/projected/dc6d57aa-811b-482e-abc2-5048e523ce88-kube-api-access-26xph\") pod \"nova-api-0\" (UID: \"dc6d57aa-811b-482e-abc2-5048e523ce88\") " pod="openstack/nova-api-0" Jan 26 18:58:27 crc kubenswrapper[4737]: I0126 18:58:27.460452 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 18:58:28 crc kubenswrapper[4737]: I0126 18:58:28.030987 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 18:58:28 crc kubenswrapper[4737]: W0126 18:58:28.035568 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc6d57aa_811b_482e_abc2_5048e523ce88.slice/crio-0f17009a167daf9ecdb3942aa0a2240db786da747076c785d997bc949f1a7085 WatchSource:0}: Error finding container 0f17009a167daf9ecdb3942aa0a2240db786da747076c785d997bc949f1a7085: Status 404 returned error can't find the container with id 0f17009a167daf9ecdb3942aa0a2240db786da747076c785d997bc949f1a7085 Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.003334 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b95787-7a5f-4e14-98f2-e2d9500a9df6" path="/var/lib/kubelet/pods/41b95787-7a5f-4e14-98f2-e2d9500a9df6/volumes" Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.010091 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc6d57aa-811b-482e-abc2-5048e523ce88","Type":"ContainerStarted","Data":"8e107799f630a4e323d8a7906b2644e795e3a663f0c95211e73c9181a3972147"} Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.010162 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc6d57aa-811b-482e-abc2-5048e523ce88","Type":"ContainerStarted","Data":"29d76f2dca778fb49819bfb50b273fa5c49f83955a8fb59beb2fd90e45dc3585"} Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.010179 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc6d57aa-811b-482e-abc2-5048e523ce88","Type":"ContainerStarted","Data":"0f17009a167daf9ecdb3942aa0a2240db786da747076c785d997bc949f1a7085"} Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.039287 4737 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.039257615 podStartE2EDuration="2.039257615s" podCreationTimestamp="2026-01-26 18:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:29.03008292 +0000 UTC m=+1682.338277648" watchObservedRunningTime="2026-01-26 18:58:29.039257615 +0000 UTC m=+1682.347452323" Jan 26 18:58:29 crc kubenswrapper[4737]: I0126 18:58:29.506290 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.322865 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.323637 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.949636 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.950005 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.950095 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.951132 4737 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:58:30 crc kubenswrapper[4737]: I0126 18:58:30.951193 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" gracePeriod=600 Jan 26 18:58:31 crc kubenswrapper[4737]: E0126 18:58:31.072492 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:58:32 crc kubenswrapper[4737]: I0126 18:58:32.060666 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" exitCode=0 Jan 26 18:58:32 crc kubenswrapper[4737]: I0126 18:58:32.060748 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336"} Jan 26 18:58:32 crc kubenswrapper[4737]: I0126 18:58:32.061034 4737 scope.go:117] "RemoveContainer" containerID="2e00b45a79587ca6768c3a9f0e09f0e494c418f3da2b1b4af85ad9741a3fdd5c" Jan 26 18:58:32 crc 
kubenswrapper[4737]: I0126 18:58:32.061852 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:58:32 crc kubenswrapper[4737]: E0126 18:58:32.062365 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:58:33 crc kubenswrapper[4737]: E0126 18:58:33.577783 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:34 crc kubenswrapper[4737]: I0126 18:58:34.506049 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 18:58:34 crc kubenswrapper[4737]: I0126 18:58:34.541573 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 18:58:34 crc kubenswrapper[4737]: E0126 18:58:34.577956 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:35 crc kubenswrapper[4737]: I0126 18:58:35.156385 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 18:58:35 
crc kubenswrapper[4737]: I0126 18:58:35.322859 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:58:35 crc kubenswrapper[4737]: I0126 18:58:35.323382 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 18:58:36 crc kubenswrapper[4737]: I0126 18:58:36.338352 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e472c4b-c138-4b34-b972-84afd363d6dd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:58:36 crc kubenswrapper[4737]: I0126 18:58:36.338345 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4e472c4b-c138-4b34-b972-84afd363d6dd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:58:37 crc kubenswrapper[4737]: I0126 18:58:37.461558 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:58:37 crc kubenswrapper[4737]: I0126 18:58:37.461971 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 18:58:38 crc kubenswrapper[4737]: I0126 18:58:38.473485 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc6d57aa-811b-482e-abc2-5048e523ce88" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:58:38 crc kubenswrapper[4737]: I0126 18:58:38.473801 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc6d57aa-811b-482e-abc2-5048e523ce88" containerName="nova-api-log" 
probeResult="failure" output="Get \"https://10.217.1.8:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.107230 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.185503 4737 generic.go:334] "Generic (PLEG): container finished" podID="d92c400c-6139-4277-b112-2c725f091503" containerID="d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261" exitCode=137 Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.186102 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerDied","Data":"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261"} Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.186155 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d92c400c-6139-4277-b112-2c725f091503","Type":"ContainerDied","Data":"c20dea87a4e7e4833431c87aab7388815e36efba3dd58e68a004067566744dcd"} Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.186183 4737 scope.go:117] "RemoveContainer" containerID="d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.186393 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.218906 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts\") pod \"d92c400c-6139-4277-b112-2c725f091503\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.219206 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data\") pod \"d92c400c-6139-4277-b112-2c725f091503\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.219333 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v92w9\" (UniqueName: \"kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9\") pod \"d92c400c-6139-4277-b112-2c725f091503\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.219383 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle\") pod \"d92c400c-6139-4277-b112-2c725f091503\" (UID: \"d92c400c-6139-4277-b112-2c725f091503\") " Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.228330 4737 scope.go:117] "RemoveContainer" containerID="ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.228527 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts" (OuterVolumeSpecName: "scripts") pod "d92c400c-6139-4277-b112-2c725f091503" (UID: "d92c400c-6139-4277-b112-2c725f091503"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.272339 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9" (OuterVolumeSpecName: "kube-api-access-v92w9") pod "d92c400c-6139-4277-b112-2c725f091503" (UID: "d92c400c-6139-4277-b112-2c725f091503"). InnerVolumeSpecName "kube-api-access-v92w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.324460 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.324498 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v92w9\" (UniqueName: \"kubernetes.io/projected/d92c400c-6139-4277-b112-2c725f091503-kube-api-access-v92w9\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.372516 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92c400c-6139-4277-b112-2c725f091503" (UID: "d92c400c-6139-4277-b112-2c725f091503"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.395302 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data" (OuterVolumeSpecName: "config-data") pod "d92c400c-6139-4277-b112-2c725f091503" (UID: "d92c400c-6139-4277-b112-2c725f091503"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.410776 4737 scope.go:117] "RemoveContainer" containerID="228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.427312 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.427352 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c400c-6139-4277-b112-2c725f091503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.442086 4737 scope.go:117] "RemoveContainer" containerID="ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.470626 4737 scope.go:117] "RemoveContainer" containerID="d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.471128 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261\": container with ID starting with d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261 not found: ID does not exist" containerID="d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.471212 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261"} err="failed to get container status \"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261\": rpc error: code = NotFound desc = could not find container 
\"d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261\": container with ID starting with d2ff1e8a6e90f827f895ef6913bc212be4a0f6110366bb52f0b5db94ef510261 not found: ID does not exist" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.471263 4737 scope.go:117] "RemoveContainer" containerID="ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.471723 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db\": container with ID starting with ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db not found: ID does not exist" containerID="ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.471799 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db"} err="failed to get container status \"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db\": rpc error: code = NotFound desc = could not find container \"ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db\": container with ID starting with ba8bafdc35e24c25acaf2aaa91eec230d2fafa07358896278cdc457dc05fe2db not found: ID does not exist" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.471851 4737 scope.go:117] "RemoveContainer" containerID="228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.472830 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480\": container with ID starting with 228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480 not found: ID does not exist" 
containerID="228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.472883 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480"} err="failed to get container status \"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480\": rpc error: code = NotFound desc = could not find container \"228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480\": container with ID starting with 228cb53225b133ba970d38952a89d6b7e65288fe451e11399506d94635f4d480 not found: ID does not exist" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.472930 4737 scope.go:117] "RemoveContainer" containerID="ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.473314 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79\": container with ID starting with ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79 not found: ID does not exist" containerID="ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.473339 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79"} err="failed to get container status \"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79\": rpc error: code = NotFound desc = could not find container \"ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79\": container with ID starting with ae2faf3ae608c3d65856cb1ab3ec25312be31135813a4451fd83abd4b2873d79 not found: ID does not exist" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.560703 4737 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.578802 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.603775 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.604505 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-evaluator" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604526 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-evaluator" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.604535 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-notifier" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604541 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-notifier" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.604555 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-listener" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604564 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-listener" Jan 26 18:58:41 crc kubenswrapper[4737]: E0126 18:58:41.604589 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-api" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604595 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-api" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604825 4737 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-api" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604849 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-listener" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604866 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-notifier" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.604875 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c400c-6139-4277-b112-2c725f091503" containerName="aodh-evaluator" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.607039 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.610289 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5skxc" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.610356 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.610347 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.610578 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.610859 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.619096 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.738707 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fxdgv\" (UniqueName: \"kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.738775 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.738801 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.738878 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.738977 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.739252 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs\") pod \"aodh-0\" (UID: 
\"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.842672 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxdgv\" (UniqueName: \"kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.842760 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.842786 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.842839 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.842995 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.843923 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.848619 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.849002 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.849503 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.856191 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.858193 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.868757 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxdgv\" 
(UniqueName: \"kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv\") pod \"aodh-0\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " pod="openstack/aodh-0" Jan 26 18:58:41 crc kubenswrapper[4737]: I0126 18:58:41.994337 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 18:58:42 crc kubenswrapper[4737]: W0126 18:58:42.508485 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6e782a5_335e_4e15_b264_73a1433e49a8.slice/crio-7f8f5d8acd8e0bb41e7509c1295c2ee88fd36b931d20e940ba1afae653c3e8a8 WatchSource:0}: Error finding container 7f8f5d8acd8e0bb41e7509c1295c2ee88fd36b931d20e940ba1afae653c3e8a8: Status 404 returned error can't find the container with id 7f8f5d8acd8e0bb41e7509c1295c2ee88fd36b931d20e940ba1afae653c3e8a8 Jan 26 18:58:42 crc kubenswrapper[4737]: I0126 18:58:42.509760 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 18:58:43 crc kubenswrapper[4737]: I0126 18:58:42.999904 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92c400c-6139-4277-b112-2c725f091503" path="/var/lib/kubelet/pods/d92c400c-6139-4277-b112-2c725f091503/volumes" Jan 26 18:58:43 crc kubenswrapper[4737]: I0126 18:58:43.211652 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerStarted","Data":"7f8f5d8acd8e0bb41e7509c1295c2ee88fd36b931d20e940ba1afae653c3e8a8"} Jan 26 18:58:43 crc kubenswrapper[4737]: E0126 18:58:43.924473 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 
18:58:44 crc kubenswrapper[4737]: I0126 18:58:43.982959 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:58:44 crc kubenswrapper[4737]: E0126 18:58:43.983407 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:58:44 crc kubenswrapper[4737]: I0126 18:58:44.746159 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerStarted","Data":"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e"} Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.329449 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.333024 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.336719 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.766216 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerStarted","Data":"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6"} Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.766611 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerStarted","Data":"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091"} Jan 26 18:58:45 crc kubenswrapper[4737]: I0126 18:58:45.774422 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 18:58:46 crc kubenswrapper[4737]: I0126 18:58:46.781481 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerStarted","Data":"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59"} Jan 26 18:58:46 crc kubenswrapper[4737]: I0126 18:58:46.823567 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.15030002 podStartE2EDuration="5.823537269s" podCreationTimestamp="2026-01-26 18:58:41 +0000 UTC" firstStartedPulling="2026-01-26 18:58:42.51102516 +0000 UTC m=+1695.819219858" lastFinishedPulling="2026-01-26 18:58:46.184262389 +0000 UTC m=+1699.492457107" observedRunningTime="2026-01-26 18:58:46.805194069 +0000 UTC m=+1700.113388777" watchObservedRunningTime="2026-01-26 18:58:46.823537269 +0000 UTC m=+1700.131731987" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.469461 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.471267 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.475748 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.477644 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.793239 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Jan 26 18:58:47 crc kubenswrapper[4737]: I0126 18:58:47.801809 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 18:58:48 crc kubenswrapper[4737]: E0126 18:58:48.108191 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:48 crc kubenswrapper[4737]: E0126 18:58:48.108500 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:48 crc kubenswrapper[4737]: I0126 18:58:48.197295 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 18:58:49 crc kubenswrapper[4737]: E0126 18:58:49.915526 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.620504 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.621724 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" containerName="kube-state-metrics" 
containerID="cri-o://c04895f54990b88c5519934e14d7ebee5009b74f57e0554fc193a7549f810162" gracePeriod=30 Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.724036 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.724283 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="7686b11b-6dd6-4748-9358-79a3885e118a" containerName="mysqld-exporter" containerID="cri-o://9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99" gracePeriod=30 Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.850484 4737 generic.go:334] "Generic (PLEG): container finished" podID="aba2f81e-11de-4d89-ab90-34ca36d205d6" containerID="c04895f54990b88c5519934e14d7ebee5009b74f57e0554fc193a7549f810162" exitCode=2 Jan 26 18:58:52 crc kubenswrapper[4737]: I0126 18:58:52.850916 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aba2f81e-11de-4d89-ab90-34ca36d205d6","Type":"ContainerDied","Data":"c04895f54990b88c5519934e14d7ebee5009b74f57e0554fc193a7549f810162"} Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.255690 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.279782 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9vcw\" (UniqueName: \"kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw\") pod \"aba2f81e-11de-4d89-ab90-34ca36d205d6\" (UID: \"aba2f81e-11de-4d89-ab90-34ca36d205d6\") " Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.295372 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw" (OuterVolumeSpecName: "kube-api-access-w9vcw") pod "aba2f81e-11de-4d89-ab90-34ca36d205d6" (UID: "aba2f81e-11de-4d89-ab90-34ca36d205d6"). InnerVolumeSpecName "kube-api-access-w9vcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.341258 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.383581 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") pod \"7686b11b-6dd6-4748-9358-79a3885e118a\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.384094 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle\") pod \"7686b11b-6dd6-4748-9358-79a3885e118a\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.384156 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w6zg\" (UniqueName: \"kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg\") pod \"7686b11b-6dd6-4748-9358-79a3885e118a\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.384815 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9vcw\" (UniqueName: \"kubernetes.io/projected/aba2f81e-11de-4d89-ab90-34ca36d205d6-kube-api-access-w9vcw\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.389255 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg" (OuterVolumeSpecName: "kube-api-access-7w6zg") pod "7686b11b-6dd6-4748-9358-79a3885e118a" (UID: "7686b11b-6dd6-4748-9358-79a3885e118a"). InnerVolumeSpecName "kube-api-access-7w6zg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.426092 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7686b11b-6dd6-4748-9358-79a3885e118a" (UID: "7686b11b-6dd6-4748-9358-79a3885e118a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.485881 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data" (OuterVolumeSpecName: "config-data") pod "7686b11b-6dd6-4748-9358-79a3885e118a" (UID: "7686b11b-6dd6-4748-9358-79a3885e118a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.486167 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") pod \"7686b11b-6dd6-4748-9358-79a3885e118a\" (UID: \"7686b11b-6dd6-4748-9358-79a3885e118a\") " Jan 26 18:58:53 crc kubenswrapper[4737]: W0126 18:58:53.486311 4737 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7686b11b-6dd6-4748-9358-79a3885e118a/volumes/kubernetes.io~secret/config-data Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.486323 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data" (OuterVolumeSpecName: "config-data") pod "7686b11b-6dd6-4748-9358-79a3885e118a" (UID: "7686b11b-6dd6-4748-9358-79a3885e118a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.487417 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.487445 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7686b11b-6dd6-4748-9358-79a3885e118a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.487460 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w6zg\" (UniqueName: \"kubernetes.io/projected/7686b11b-6dd6-4748-9358-79a3885e118a-kube-api-access-7w6zg\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.863403 4737 generic.go:334] "Generic (PLEG): container finished" podID="7686b11b-6dd6-4748-9358-79a3885e118a" containerID="9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99" exitCode=2 Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.863473 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.863547 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7686b11b-6dd6-4748-9358-79a3885e118a","Type":"ContainerDied","Data":"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99"} Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.863642 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"7686b11b-6dd6-4748-9358-79a3885e118a","Type":"ContainerDied","Data":"f9622f3c4edcb1044f695b8bcd667deb079c1982cc6323f18ac12ad84653fef4"} Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.863666 4737 scope.go:117] "RemoveContainer" containerID="9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.866002 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aba2f81e-11de-4d89-ab90-34ca36d205d6","Type":"ContainerDied","Data":"a524c26effe0029f371a7ffb021d11f06bc363ce1a05d7072314e40b8034c390"} Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.866186 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.894926 4737 scope.go:117] "RemoveContainer" containerID="9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99" Jan 26 18:58:53 crc kubenswrapper[4737]: E0126 18:58:53.895558 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99\": container with ID starting with 9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99 not found: ID does not exist" containerID="9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.898178 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99"} err="failed to get container status \"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99\": rpc error: code = NotFound desc = could not find container \"9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99\": container with ID starting with 9253bbafc6a499c29336cc16c2ebfe243ba2163593aa21d9015a2048ec239a99 not found: ID does not exist" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.898243 4737 scope.go:117] "RemoveContainer" containerID="c04895f54990b88c5519934e14d7ebee5009b74f57e0554fc193a7549f810162" Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.920210 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.940377 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 18:58:53.955598 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:53 crc kubenswrapper[4737]: I0126 
18:58:53.970849 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.001519 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:54 crc kubenswrapper[4737]: E0126 18:58:54.002480 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" containerName="kube-state-metrics" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.002633 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" containerName="kube-state-metrics" Jan 26 18:58:54 crc kubenswrapper[4737]: E0126 18:58:54.002706 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7686b11b-6dd6-4748-9358-79a3885e118a" containerName="mysqld-exporter" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.002759 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7686b11b-6dd6-4748-9358-79a3885e118a" containerName="mysqld-exporter" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.003030 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" containerName="kube-state-metrics" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.003135 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7686b11b-6dd6-4748-9358-79a3885e118a" containerName="mysqld-exporter" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.004229 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.008165 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.008252 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.016191 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.035003 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.037403 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.040611 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.044659 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.070258 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.109545 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s665p\" (UniqueName: \"kubernetes.io/projected/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-kube-api-access-s665p\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110013 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110144 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-config-data\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110220 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110321 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110373 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl658\" (UniqueName: \"kubernetes.io/projected/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-api-access-zl658\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110615 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.110664 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.215738 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.217573 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl658\" (UniqueName: \"kubernetes.io/projected/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-api-access-zl658\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.217776 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.217852 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.217890 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s665p\" (UniqueName: \"kubernetes.io/projected/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-kube-api-access-s665p\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.218169 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.218218 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-config-data\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.218264 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.222160 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.224219 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.224336 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.224430 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.224453 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.238893 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-config-data\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: 
I0126 18:58:54.244152 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl658\" (UniqueName: \"kubernetes.io/projected/c57d0600-f0a4-43d2-b974-ced2346aae55-kube-api-access-zl658\") pod \"kube-state-metrics-0\" (UID: \"c57d0600-f0a4-43d2-b974-ced2346aae55\") " pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.248926 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s665p\" (UniqueName: \"kubernetes.io/projected/3cc067c6-ba98-4534-a9d8-2028c6e0ccf6-kube-api-access-s665p\") pod \"mysqld-exporter-0\" (UID: \"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6\") " pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.340141 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.363273 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 26 18:58:54 crc kubenswrapper[4737]: I0126 18:58:54.979273 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.003176 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7686b11b-6dd6-4748-9358-79a3885e118a" path="/var/lib/kubelet/pods/7686b11b-6dd6-4748-9358-79a3885e118a/volumes" Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.004036 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba2f81e-11de-4d89-ab90-34ca36d205d6" path="/var/lib/kubelet/pods/aba2f81e-11de-4d89-ab90-34ca36d205d6/volumes" Jan 26 18:58:55 crc kubenswrapper[4737]: E0126 18:58:55.048932 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.100639 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 26 18:58:55 crc kubenswrapper[4737]: W0126 18:58:55.108492 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc067c6_ba98_4534_a9d8_2028c6e0ccf6.slice/crio-30086110eda54df60559d94d8f46ef41c219472dc0df4bdb1ecbc18e3b19dd99 WatchSource:0}: Error finding container 30086110eda54df60559d94d8f46ef41c219472dc0df4bdb1ecbc18e3b19dd99: Status 404 returned error can't find the container with id 30086110eda54df60559d94d8f46ef41c219472dc0df4bdb1ecbc18e3b19dd99 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.195033 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.195615 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="sg-core" containerID="cri-o://99913179cb969e98760e34961e2e04cb75bef946735f87fc2a7382a0f43842ea" gracePeriod=30 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.195666 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="proxy-httpd" containerID="cri-o://f2ef4d6692d291899f76821abc19adc66851e3228f986f581319a98593b12e2c" gracePeriod=30 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.195681 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" 
containerName="ceilometer-notification-agent" containerID="cri-o://8fbda44cb6642c3a88a2dc82188f3e0b3389ac2b2f28bc62fb3be0d40669ec05" gracePeriod=30 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.195382 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-central-agent" containerID="cri-o://54bf97dd170e0a84b9489d653395c0cc1ce55eba0c03f6408ddeec8d9e48eef5" gracePeriod=30 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.917757 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6","Type":"ContainerStarted","Data":"30086110eda54df60559d94d8f46ef41c219472dc0df4bdb1ecbc18e3b19dd99"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.920911 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c57d0600-f0a4-43d2-b974-ced2346aae55","Type":"ContainerStarted","Data":"6a07b70d07595158ccb9e61b43eb1b6fc14667454b29982bdab67168156ac3a0"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.920963 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c57d0600-f0a4-43d2-b974-ced2346aae55","Type":"ContainerStarted","Data":"f8d454676aa649018309434a55a6bb2893c0a4efe4e1d9f2e3e8f835b2c6d2e2"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.921001 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927155 4737 generic.go:334] "Generic (PLEG): container finished" podID="7228e6d5-f15d-4152-919c-fe757191dad0" containerID="f2ef4d6692d291899f76821abc19adc66851e3228f986f581319a98593b12e2c" exitCode=0 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927195 4737 generic.go:334] "Generic (PLEG): container finished" 
podID="7228e6d5-f15d-4152-919c-fe757191dad0" containerID="99913179cb969e98760e34961e2e04cb75bef946735f87fc2a7382a0f43842ea" exitCode=2 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927209 4737 generic.go:334] "Generic (PLEG): container finished" podID="7228e6d5-f15d-4152-919c-fe757191dad0" containerID="54bf97dd170e0a84b9489d653395c0cc1ce55eba0c03f6408ddeec8d9e48eef5" exitCode=0 Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927231 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerDied","Data":"f2ef4d6692d291899f76821abc19adc66851e3228f986f581319a98593b12e2c"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927287 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerDied","Data":"99913179cb969e98760e34961e2e04cb75bef946735f87fc2a7382a0f43842ea"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.927298 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerDied","Data":"54bf97dd170e0a84b9489d653395c0cc1ce55eba0c03f6408ddeec8d9e48eef5"} Jan 26 18:58:55 crc kubenswrapper[4737]: I0126 18:58:55.970311 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.592095838 podStartE2EDuration="2.970287053s" podCreationTimestamp="2026-01-26 18:58:53 +0000 UTC" firstStartedPulling="2026-01-26 18:58:55.001827586 +0000 UTC m=+1708.310022304" lastFinishedPulling="2026-01-26 18:58:55.380018811 +0000 UTC m=+1708.688213519" observedRunningTime="2026-01-26 18:58:55.960838942 +0000 UTC m=+1709.269033650" watchObservedRunningTime="2026-01-26 18:58:55.970287053 +0000 UTC m=+1709.278481761" Jan 26 18:58:56 crc kubenswrapper[4737]: I0126 18:58:56.940306 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3cc067c6-ba98-4534-a9d8-2028c6e0ccf6","Type":"ContainerStarted","Data":"e6457ffd95a643e6831013803cc4497aa4d283839446e9f8ce818153dc05855d"} Jan 26 18:58:57 crc kubenswrapper[4737]: I0126 18:58:57.964438 4737 generic.go:334] "Generic (PLEG): container finished" podID="7228e6d5-f15d-4152-919c-fe757191dad0" containerID="8fbda44cb6642c3a88a2dc82188f3e0b3389ac2b2f28bc62fb3be0d40669ec05" exitCode=0 Jan 26 18:58:57 crc kubenswrapper[4737]: I0126 18:58:57.964490 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerDied","Data":"8fbda44cb6642c3a88a2dc82188f3e0b3389ac2b2f28bc62fb3be0d40669ec05"} Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.264339 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.294124 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.748710695 podStartE2EDuration="5.294104815s" podCreationTimestamp="2026-01-26 18:58:53 +0000 UTC" firstStartedPulling="2026-01-26 18:58:55.110810768 +0000 UTC m=+1708.419005476" lastFinishedPulling="2026-01-26 18:58:55.656204888 +0000 UTC m=+1708.964399596" observedRunningTime="2026-01-26 18:58:56.962328121 +0000 UTC m=+1710.270522829" watchObservedRunningTime="2026-01-26 18:58:58.294104815 +0000 UTC m=+1711.602299513" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322091 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322340 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322360 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322378 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322764 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.322979 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.325334 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.325453 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.325516 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkvb\" (UniqueName: \"kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb\") pod \"7228e6d5-f15d-4152-919c-fe757191dad0\" (UID: \"7228e6d5-f15d-4152-919c-fe757191dad0\") " Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.329258 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.329289 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7228e6d5-f15d-4152-919c-fe757191dad0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.333384 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb" (OuterVolumeSpecName: "kube-api-access-nhkvb") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: 
"7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "kube-api-access-nhkvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.335835 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts" (OuterVolumeSpecName: "scripts") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.392196 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.431450 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.431482 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.431492 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkvb\" (UniqueName: \"kubernetes.io/projected/7228e6d5-f15d-4152-919c-fe757191dad0-kube-api-access-nhkvb\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.480053 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.505270 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data" (OuterVolumeSpecName: "config-data") pod "7228e6d5-f15d-4152-919c-fe757191dad0" (UID: "7228e6d5-f15d-4152-919c-fe757191dad0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.533445 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.533482 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7228e6d5-f15d-4152-919c-fe757191dad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.978363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7228e6d5-f15d-4152-919c-fe757191dad0","Type":"ContainerDied","Data":"0be5a2472ca4eab6531b1c5172e98afe6419f35aa44839c3e03b31058ea8f1c3"} Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.978728 4737 scope.go:117] "RemoveContainer" containerID="f2ef4d6692d291899f76821abc19adc66851e3228f986f581319a98593b12e2c" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.978440 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:58 crc kubenswrapper[4737]: I0126 18:58:58.982121 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:58:58 crc kubenswrapper[4737]: E0126 18:58:58.982484 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.007564 4737 scope.go:117] "RemoveContainer" containerID="99913179cb969e98760e34961e2e04cb75bef946735f87fc2a7382a0f43842ea" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.019045 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.034535 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.041540 4737 scope.go:117] "RemoveContainer" containerID="8fbda44cb6642c3a88a2dc82188f3e0b3389ac2b2f28bc62fb3be0d40669ec05" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.066454 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:59 crc kubenswrapper[4737]: E0126 18:58:59.067177 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-central-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067202 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-central-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: E0126 
18:58:59.067258 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-notification-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067268 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-notification-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: E0126 18:58:59.067284 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="proxy-httpd" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067291 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="proxy-httpd" Jan 26 18:58:59 crc kubenswrapper[4737]: E0126 18:58:59.067318 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="sg-core" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067326 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="sg-core" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067610 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="proxy-httpd" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067623 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-notification-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067644 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="ceilometer-central-agent" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.067683 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" containerName="sg-core" Jan 26 18:58:59 crc 
kubenswrapper[4737]: I0126 18:58:59.074964 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.079468 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.082877 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.083162 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.084301 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.114731 4737 scope.go:117] "RemoveContainer" containerID="54bf97dd170e0a84b9489d653395c0cc1ce55eba0c03f6408ddeec8d9e48eef5" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156352 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156429 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156496 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156610 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156642 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.156941 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sk5w\" (UniqueName: \"kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.157285 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.157323 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc 
kubenswrapper[4737]: I0126 18:58:59.260634 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.260729 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.260791 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.260820 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.260846 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.260920 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sk5w\" (UniqueName: 
\"kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.261020 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.261452 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.261918 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.262174 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.268021 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.268082 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.268406 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.268832 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.272988 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.280921 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sk5w\" (UniqueName: \"kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w\") pod \"ceilometer-0\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.415638 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.939744 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.950824 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:58:59 crc kubenswrapper[4737]: I0126 18:58:59.992309 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerStarted","Data":"f2d88a91a1e530f50b624d7fd63ed8ca7538d42be22e5fee7e2f2213cb3d5cef"} Jan 26 18:59:00 crc kubenswrapper[4737]: I0126 18:59:00.997099 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7228e6d5-f15d-4152-919c-fe757191dad0" path="/var/lib/kubelet/pods/7228e6d5-f15d-4152-919c-fe757191dad0/volumes" Jan 26 18:59:01 crc kubenswrapper[4737]: I0126 18:59:01.007346 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerStarted","Data":"2c50860ca4a9da5dc0f485977cdf5a4020849c40cee13623bc83834f80a45e23"} Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.019869 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerStarted","Data":"afdd10e69c9cb4808109297b0a86ec5530ef1264faed9c71ad190e23a2438be2"} Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.289674 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-kdfn7"] Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.302587 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-kdfn7"] Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.378694 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-k2vkj"] Jan 26 18:59:02 crc 
kubenswrapper[4737]: I0126 18:59:02.380500 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.390195 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k2vkj"] Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.446804 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.447154 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pdh\" (UniqueName: \"kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.447360 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.550311 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.550411 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.550451 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pdh\" (UniqueName: \"kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.557041 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.559976 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.581724 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8pdh\" (UniqueName: \"kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh\") pod \"heat-db-sync-k2vkj\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:02 crc kubenswrapper[4737]: I0126 18:59:02.764267 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:03 crc kubenswrapper[4737]: I0126 18:59:03.005152 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54a9f74e-fc12-43b7-aca3-0594480e0222" path="/var/lib/kubelet/pods/54a9f74e-fc12-43b7-aca3-0594480e0222/volumes" Jan 26 18:59:03 crc kubenswrapper[4737]: I0126 18:59:03.285625 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k2vkj"] Jan 26 18:59:04 crc kubenswrapper[4737]: I0126 18:59:04.081806 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k2vkj" event={"ID":"7f3a0926-ce79-4117-b8e6-96fcf0a492fc","Type":"ContainerStarted","Data":"12297f3720eeef90ca194ba9d7786e83bed69b06c2febbfef59f7f8b3d2df749"} Jan 26 18:59:04 crc kubenswrapper[4737]: I0126 18:59:04.091027 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerStarted","Data":"7028fcd936a3232e457861a365506088802285df015c2fc8b53cf8a2caeaf637"} Jan 26 18:59:04 crc kubenswrapper[4737]: I0126 18:59:04.358942 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 18:59:04 crc kubenswrapper[4737]: E0126 18:59:04.583194 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:59:05 crc kubenswrapper[4737]: E0126 18:59:05.134186 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:59:05 crc kubenswrapper[4737]: I0126 18:59:05.275129 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 18:59:06 crc kubenswrapper[4737]: I0126 18:59:06.156627 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerStarted","Data":"833900c22601f0bc3e9d1bfde27ee06ce1a256e58c8b476ae1c5e3fa98514f2b"} Jan 26 18:59:06 crc kubenswrapper[4737]: I0126 18:59:06.159231 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:59:06 crc kubenswrapper[4737]: I0126 18:59:06.162312 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:59:06 crc kubenswrapper[4737]: I0126 18:59:06.206363 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.340411815 podStartE2EDuration="7.206339482s" podCreationTimestamp="2026-01-26 18:58:59 +0000 UTC" firstStartedPulling="2026-01-26 18:58:59.950634274 +0000 UTC m=+1713.258828982" lastFinishedPulling="2026-01-26 18:59:04.816561941 +0000 UTC m=+1718.124756649" observedRunningTime="2026-01-26 18:59:06.191510406 +0000 UTC m=+1719.499705114" watchObservedRunningTime="2026-01-26 18:59:06.206339482 +0000 UTC m=+1719.514534190" Jan 26 18:59:06 crc kubenswrapper[4737]: I0126 18:59:06.573007 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:59:08 crc kubenswrapper[4737]: I0126 18:59:08.193061 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="629351f3-688a-4450-930b-98759732daee" 
containerName="ceilometer-central-agent" containerID="cri-o://2c50860ca4a9da5dc0f485977cdf5a4020849c40cee13623bc83834f80a45e23" gracePeriod=30 Jan 26 18:59:08 crc kubenswrapper[4737]: I0126 18:59:08.194779 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="629351f3-688a-4450-930b-98759732daee" containerName="proxy-httpd" containerID="cri-o://833900c22601f0bc3e9d1bfde27ee06ce1a256e58c8b476ae1c5e3fa98514f2b" gracePeriod=30 Jan 26 18:59:08 crc kubenswrapper[4737]: I0126 18:59:08.194919 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-notification-agent" containerID="cri-o://afdd10e69c9cb4808109297b0a86ec5530ef1264faed9c71ad190e23a2438be2" gracePeriod=30 Jan 26 18:59:08 crc kubenswrapper[4737]: I0126 18:59:08.194962 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="629351f3-688a-4450-930b-98759732daee" containerName="sg-core" containerID="cri-o://7028fcd936a3232e457861a365506088802285df015c2fc8b53cf8a2caeaf637" gracePeriod=30 Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226328 4737 generic.go:334] "Generic (PLEG): container finished" podID="629351f3-688a-4450-930b-98759732daee" containerID="833900c22601f0bc3e9d1bfde27ee06ce1a256e58c8b476ae1c5e3fa98514f2b" exitCode=0 Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226677 4737 generic.go:334] "Generic (PLEG): container finished" podID="629351f3-688a-4450-930b-98759732daee" containerID="7028fcd936a3232e457861a365506088802285df015c2fc8b53cf8a2caeaf637" exitCode=2 Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226689 4737 generic.go:334] "Generic (PLEG): container finished" podID="629351f3-688a-4450-930b-98759732daee" containerID="afdd10e69c9cb4808109297b0a86ec5530ef1264faed9c71ad190e23a2438be2" exitCode=0 Jan 26 18:59:09 crc kubenswrapper[4737]: 
I0126 18:59:09.226697 4737 generic.go:334] "Generic (PLEG): container finished" podID="629351f3-688a-4450-930b-98759732daee" containerID="2c50860ca4a9da5dc0f485977cdf5a4020849c40cee13623bc83834f80a45e23" exitCode=0 Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226725 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerDied","Data":"833900c22601f0bc3e9d1bfde27ee06ce1a256e58c8b476ae1c5e3fa98514f2b"} Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226758 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerDied","Data":"7028fcd936a3232e457861a365506088802285df015c2fc8b53cf8a2caeaf637"} Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226770 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerDied","Data":"afdd10e69c9cb4808109297b0a86ec5530ef1264faed9c71ad190e23a2438be2"} Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.226780 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerDied","Data":"2c50860ca4a9da5dc0f485977cdf5a4020849c40cee13623bc83834f80a45e23"} Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.564917 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.692566 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.692757 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.692884 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.692923 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.693018 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.693844 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.693899 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.694020 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sk5w\" (UniqueName: \"kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w\") pod \"629351f3-688a-4450-930b-98759732daee\" (UID: \"629351f3-688a-4450-930b-98759732daee\") " Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.695333 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.695414 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.697844 4737 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.697880 4737 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/629351f3-688a-4450-930b-98759732daee-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.710341 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w" (OuterVolumeSpecName: "kube-api-access-9sk5w") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "kube-api-access-9sk5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.727170 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts" (OuterVolumeSpecName: "scripts") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.736658 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.769247 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.799868 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.799904 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sk5w\" (UniqueName: \"kubernetes.io/projected/629351f3-688a-4450-930b-98759732daee-kube-api-access-9sk5w\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.799915 4737 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.799923 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.840917 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.897871 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data" (OuterVolumeSpecName: "config-data") pod "629351f3-688a-4450-930b-98759732daee" (UID: "629351f3-688a-4450-930b-98759732daee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.902990 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:09 crc kubenswrapper[4737]: I0126 18:59:09.903041 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629351f3-688a-4450-930b-98759732daee-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.250419 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"629351f3-688a-4450-930b-98759732daee","Type":"ContainerDied","Data":"f2d88a91a1e530f50b624d7fd63ed8ca7538d42be22e5fee7e2f2213cb3d5cef"} Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.250854 4737 scope.go:117] "RemoveContainer" containerID="833900c22601f0bc3e9d1bfde27ee06ce1a256e58c8b476ae1c5e3fa98514f2b" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.250743 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.309875 4737 scope.go:117] "RemoveContainer" containerID="7028fcd936a3232e457861a365506088802285df015c2fc8b53cf8a2caeaf637" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.311209 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.342220 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.371703 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:59:10 crc kubenswrapper[4737]: E0126 18:59:10.372683 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629351f3-688a-4450-930b-98759732daee" containerName="sg-core" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.372704 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="629351f3-688a-4450-930b-98759732daee" containerName="sg-core" Jan 26 18:59:10 crc kubenswrapper[4737]: E0126 18:59:10.372740 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-central-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.372748 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-central-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: E0126 18:59:10.372756 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-notification-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.372767 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-notification-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: E0126 18:59:10.372778 4737 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="629351f3-688a-4450-930b-98759732daee" containerName="proxy-httpd" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.372794 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="629351f3-688a-4450-930b-98759732daee" containerName="proxy-httpd" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.373464 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-notification-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.373484 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="629351f3-688a-4450-930b-98759732daee" containerName="sg-core" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.373517 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="629351f3-688a-4450-930b-98759732daee" containerName="ceilometer-central-agent" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.373526 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="629351f3-688a-4450-930b-98759732daee" containerName="proxy-httpd" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.379322 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.383386 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.385644 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.385965 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.392847 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.401458 4737 scope.go:117] "RemoveContainer" containerID="afdd10e69c9cb4808109297b0a86ec5530ef1264faed9c71ad190e23a2438be2" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.440481 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.440586 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-log-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.440629 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " 
pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.440901 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-run-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.441066 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kt25\" (UniqueName: \"kubernetes.io/projected/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-kube-api-access-4kt25\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.441257 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-config-data\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.441386 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-scripts\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.441420 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.475182 4737 scope.go:117] "RemoveContainer" 
containerID="2c50860ca4a9da5dc0f485977cdf5a4020849c40cee13623bc83834f80a45e23" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.543479 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-log-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.543542 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.544197 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-run-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.544267 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-run-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.544986 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-log-httpd\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.545349 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kt25\" (UniqueName: 
\"kubernetes.io/projected/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-kube-api-access-4kt25\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.545434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-config-data\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.545507 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-scripts\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.545542 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.545607 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.550902 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: 
I0126 18:59:10.551370 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-scripts\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.551875 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.554852 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.569149 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-config-data\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.569454 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kt25\" (UniqueName: \"kubernetes.io/projected/43f4c1d0-e222-4099-ad1a-73d3c9d9530a-kube-api-access-4kt25\") pod \"ceilometer-0\" (UID: \"43f4c1d0-e222-4099-ad1a-73d3c9d9530a\") " pod="openstack/ceilometer-0" Jan 26 18:59:10 crc kubenswrapper[4737]: I0126 18:59:10.719879 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 18:59:11 crc kubenswrapper[4737]: I0126 18:59:11.011605 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629351f3-688a-4450-930b-98759732daee" path="/var/lib/kubelet/pods/629351f3-688a-4450-930b-98759732daee/volumes" Jan 26 18:59:11 crc kubenswrapper[4737]: I0126 18:59:11.468281 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq" containerID="cri-o://afd662fed630029ff5f2e324a72eedc21f44c56b09e0acccce1a15ca6ba0a38d" gracePeriod=604794 Jan 26 18:59:12 crc kubenswrapper[4737]: I0126 18:59:12.477033 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="rabbitmq" containerID="cri-o://06ea35a5ccb8ba1fbe6e8de8565abfd8337b400abc61eb1d009c2e44d87e15bc" gracePeriod=604795 Jan 26 18:59:13 crc kubenswrapper[4737]: I0126 18:59:13.983209 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:59:13 crc kubenswrapper[4737]: E0126 18:59:13.984190 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:59:15 crc kubenswrapper[4737]: E0126 18:59:15.513159 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:59:16 crc kubenswrapper[4737]: I0126 18:59:16.181897 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 26 18:59:16 crc kubenswrapper[4737]: I0126 18:59:16.484091 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Jan 26 18:59:19 crc kubenswrapper[4737]: I0126 18:59:19.383882 4737 generic.go:334] "Generic (PLEG): container finished" podID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerID="06ea35a5ccb8ba1fbe6e8de8565abfd8337b400abc61eb1d009c2e44d87e15bc" exitCode=0 Jan 26 18:59:19 crc kubenswrapper[4737]: I0126 18:59:19.383999 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerDied","Data":"06ea35a5ccb8ba1fbe6e8de8565abfd8337b400abc61eb1d009c2e44d87e15bc"} Jan 26 18:59:19 crc kubenswrapper[4737]: I0126 18:59:19.389207 4737 generic.go:334] "Generic (PLEG): container finished" podID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerID="afd662fed630029ff5f2e324a72eedc21f44c56b09e0acccce1a15ca6ba0a38d" exitCode=0 Jan 26 18:59:19 crc kubenswrapper[4737]: I0126 18:59:19.389244 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerDied","Data":"afd662fed630029ff5f2e324a72eedc21f44c56b09e0acccce1a15ca6ba0a38d"} Jan 26 
18:59:19 crc kubenswrapper[4737]: E0126 18:59:19.574760 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.842435 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"] Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.845498 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.848949 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.859554 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"] Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.886821 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.886951 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.888982 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.892039 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.892422 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.892488 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.892904 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vcl\" (UniqueName: \"kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 
18:59:20.996300 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.996905 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997026 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997234 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997405 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8vcl\" (UniqueName: \"kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997520 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997632 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997526 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.997770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.998120 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.998629 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:20 crc kubenswrapper[4737]: I0126 18:59:20.998710 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:21 crc kubenswrapper[4737]: I0126 18:59:21.001621 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:21 crc kubenswrapper[4737]: I0126 18:59:21.049538 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vcl\" (UniqueName: \"kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl\") pod \"dnsmasq-dns-7d84b4d45c-92v7q\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:21 crc kubenswrapper[4737]: I0126 18:59:21.177770 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:24 crc kubenswrapper[4737]: E0126 18:59:24.335701 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 26 18:59:24 crc kubenswrapper[4737]: E0126 18:59:24.336463 4737 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 26 18:59:24 crc kubenswrapper[4737]: E0126 18:59:24.336621 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8pdh,ReadOnly:
true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k2vkj_openstack(7f3a0926-ce79-4117-b8e6-96fcf0a492fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:59:24 crc kubenswrapper[4737]: E0126 18:59:24.339109 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-k2vkj" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.465931 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"ca2ccc7a-b591-4abe-b133-f959b5445611","Type":"ContainerDied","Data":"6a1a51e2413b378d6a7940812f10933c9a99e1b502881a766a143b74e90c7c5a"} Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.466063 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a1a51e2413b378d6a7940812f10933c9a99e1b502881a766a143b74e90c7c5a" Jan 26 18:59:24 crc kubenswrapper[4737]: E0126 18:59:24.469309 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k2vkj" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.523765 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.619240 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.619393 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrm67\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.619470 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.619521 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.619591 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.625310 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.631222 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.631305 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.632260 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.632532 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins\") pod 
\"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.632693 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info" (OuterVolumeSpecName: "pod-info") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.633339 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.633484 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.633951 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67" (OuterVolumeSpecName: "kube-api-access-mrm67") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "kube-api-access-mrm67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.634777 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.634872 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"ca2ccc7a-b591-4abe-b133-f959b5445611\" (UID: \"ca2ccc7a-b591-4abe-b133-f959b5445611\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.635310 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636822 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636848 4737 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ca2ccc7a-b591-4abe-b133-f959b5445611-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636863 4737 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636873 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636893 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.636908 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrm67\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-kube-api-access-mrm67\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.671253 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: 
"ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.739255 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7" (OuterVolumeSpecName: "persistence") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.747323 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") on node \"crc\" " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.747430 4737 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ca2ccc7a-b591-4abe-b133-f959b5445611-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.767471 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.834926 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf" (OuterVolumeSpecName: "server-conf") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.849325 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.849797 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.849869 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.849966 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850035 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850060 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850126 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850206 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850243 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850267 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.850316 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbj5x\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x\") pod \"89a3c35d-3e74-49b8-ae17-81808321d00d\" (UID: \"89a3c35d-3e74-49b8-ae17-81808321d00d\") " Jan 26 18:59:24 crc 
kubenswrapper[4737]: I0126 18:59:24.851314 4737 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.853224 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.853242 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.856547 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.863720 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.867526 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x" (OuterVolumeSpecName: "kube-api-access-dbj5x") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "kube-api-access-dbj5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.868593 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.901978 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data" (OuterVolumeSpecName: "config-data") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.904782 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info" (OuterVolumeSpecName: "pod-info") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.916542 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb" (OuterVolumeSpecName: "persistence") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955087 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955136 4737 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89a3c35d-3e74-49b8-ae17-81808321d00d-pod-info\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955149 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955164 4737 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89a3c35d-3e74-49b8-ae17-81808321d00d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955180 4737 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955194 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbj5x\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-kube-api-access-dbj5x\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955206 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ca2ccc7a-b591-4abe-b133-f959b5445611-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955251 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") on node \"crc\" "
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.955267 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.967229 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 18:59:24 crc kubenswrapper[4737]: I0126 18:59:24.973691 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7") on node "crc"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.009912 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data" (OuterVolumeSpecName: "config-data") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.010802 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf" (OuterVolumeSpecName: "server-conf") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.048431 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.048680 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb") on node "crc"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.062774 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.062803 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.062816 4737 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-server-conf\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.062832 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89a3c35d-3e74-49b8-ae17-81808321d00d-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.125354 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ca2ccc7a-b591-4abe-b133-f959b5445611" (UID: "ca2ccc7a-b591-4abe-b133-f959b5445611"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.171723 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ca2ccc7a-b591-4abe-b133-f959b5445611-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.183451 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "89a3c35d-3e74-49b8-ae17-81808321d00d" (UID: "89a3c35d-3e74-49b8-ae17-81808321d00d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.273769 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89a3c35d-3e74-49b8-ae17-81808321d00d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.390758 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.408428 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.498968 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89a3c35d-3e74-49b8-ae17-81808321d00d","Type":"ContainerDied","Data":"8f62d35970963431573036fce6585d65aa0b4fb788a7b5e7fa3cc2b77ba8009e"}
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.499088 4737 scope.go:117] "RemoveContainer" containerID="06ea35a5ccb8ba1fbe6e8de8565abfd8337b400abc61eb1d009c2e44d87e15bc"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.499435 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.510478 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43f4c1d0-e222-4099-ad1a-73d3c9d9530a","Type":"ContainerStarted","Data":"8ff8c7bcea3e9f9841a605f9af9d4b737a11c003a1b617e2cd23e16eed5698ac"}
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.515763 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.518193 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" event={"ID":"25444fbe-165b-40a7-b446-8bec4dfb854d","Type":"ContainerStarted","Data":"8e88e05b28ba8565bbb97884f132205a50c3c2139b3545855f80c9b504be92dc"}
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.545727 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.560256 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.580050 4737 scope.go:117] "RemoveContainer" containerID="2a45bf488bd58772199e809a22fe3c7f3e42578b271a140966f49ff0c91d3844"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.605878 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 18:59:25 crc kubenswrapper[4737]: E0126 18:59:25.606588 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="setup-container"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.606602 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="setup-container"
Jan 26 18:59:25 crc kubenswrapper[4737]: E0126 18:59:25.606620 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="setup-container"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.606626 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="setup-container"
Jan 26 18:59:25 crc kubenswrapper[4737]: E0126 18:59:25.606645 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.606651 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: E0126 18:59:25.607174 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.607185 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.607653 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.607689 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" containerName="rabbitmq"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.609182 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.614129 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.619632 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.619890 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.620260 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-j5nkh"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.620414 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.620323 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.620612 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.635913 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.669880 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.715746 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.719528 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.757695 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.785719 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.790514 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.790832 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.793505 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.793710 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.793867 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.794105 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79d6z\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-kube-api-access-79d6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.794310 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.794598 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.795227 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.796688 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.796874 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.900099 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.900604 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-config-data\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.900688 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.901637 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.901681 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d4092c-abb5-4218-81dc-32ba2257004d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.901707 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d4092c-abb5-4218-81dc-32ba2257004d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902122 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902148 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902277 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902365 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902452 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902512 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m5m5\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-kube-api-access-7m5m5\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902621 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.902587 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903143 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903208 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903260 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903267 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903317 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79d6z\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-kube-api-access-79d6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903451 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903504 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903596 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903713 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.903757 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.904674 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.904658 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.905765 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.909418 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.910580 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.910629 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd4fc01515cf411f2c3c1201953e7057ccc603e7317600a03debd4076f0e2cbc/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.913329 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.914558 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.920863 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.927722 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79d6z\" (UniqueName: \"kubernetes.io/projected/e5db87e3-e7cb-4248-bc3a-5c6f5d92c144-kube-api-access-79d6z\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:25 crc kubenswrapper[4737]: E0126 18:59:25.987977 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41b95787_7a5f_4e14_98f2_e2d9500a9df6.slice/crio-84c325de66a34510bdc87f64b53fbe96d77ce6eb3b7015b5731523859705a700\": RecentStats: unable to find data in memory cache]"
Jan 26 18:59:25 crc kubenswrapper[4737]: I0126 18:59:25.998124 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49eb82bb-9c03-410a-9d39-d4b8709abbeb\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006500 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006607 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m5m5\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-kube-api-access-7m5m5\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006711 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006797 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006877 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006908 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-config-data\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006970 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d4092c-abb5-4218-81dc-32ba2257004d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.006988 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d4092c-abb5-4218-81dc-32ba2257004d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.007034 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.007058 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.007442 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.007793 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.008528 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.011265 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.011441 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d4092c-abb5-4218-81dc-32ba2257004d-config-data\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.011664 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d4092c-abb5-4218-81dc-32ba2257004d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.011715 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2"
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.013014 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.013044 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4b9ceb8c52abf651bff7514af3cc683572e9e232935ffe7b4905a076db603b6/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.013162 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.015066 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.016119 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d4092c-abb5-4218-81dc-32ba2257004d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.032635 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m5m5\" (UniqueName: \"kubernetes.io/projected/44d4092c-abb5-4218-81dc-32ba2257004d-kube-api-access-7m5m5\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.115493 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fbf0178-86f6-49f4-b2f3-2a47d08ef3e7\") pod \"rabbitmq-server-2\" (UID: \"44d4092c-abb5-4218-81dc-32ba2257004d\") " pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.368897 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.558544 4737 generic.go:334] "Generic (PLEG): container finished" podID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerID="5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da" exitCode=0 Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.559132 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" event={"ID":"25444fbe-165b-40a7-b446-8bec4dfb854d","Type":"ContainerDied","Data":"5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da"} Jan 26 18:59:26 crc kubenswrapper[4737]: I0126 18:59:26.679037 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.030420 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a3c35d-3e74-49b8-ae17-81808321d00d" path="/var/lib/kubelet/pods/89a3c35d-3e74-49b8-ae17-81808321d00d/volumes" Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.033606 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2ccc7a-b591-4abe-b133-f959b5445611" path="/var/lib/kubelet/pods/ca2ccc7a-b591-4abe-b133-f959b5445611/volumes" Jan 26 18:59:27 crc kubenswrapper[4737]: E0126 18:59:27.036052 4737 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bac70e6e1967499e3b30a5f6301e82769cd44a427eb2b6593cb414e41b25c208/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bac70e6e1967499e3b30a5f6301e82769cd44a427eb2b6593cb414e41b25c208/diff: no such file or directory, extraDiskErr: Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.037861 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.590618 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" event={"ID":"25444fbe-165b-40a7-b446-8bec4dfb854d","Type":"ContainerStarted","Data":"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931"} Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.590932 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.592269 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"44d4092c-abb5-4218-81dc-32ba2257004d","Type":"ContainerStarted","Data":"d8dfb09ccf3b4816f7a1dd6da9dbf04e79007f548c5f487b16719b581534ba43"} Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.595890 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144","Type":"ContainerStarted","Data":"73063a03e9b1ef90adcb8688ecc57a9ba748352b62fc863dd28d23a51ff58a41"} Jan 26 18:59:27 crc kubenswrapper[4737]: I0126 18:59:27.616655 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" podStartSLOduration=7.616604761 podStartE2EDuration="7.616604761s" podCreationTimestamp="2026-01-26 18:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:59:27.61398644 +0000 UTC m=+1740.922181148" watchObservedRunningTime="2026-01-26 18:59:27.616604761 +0000 UTC m=+1740.924799469" Jan 26 18:59:28 crc kubenswrapper[4737]: I0126 18:59:28.982677 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:59:28 crc kubenswrapper[4737]: E0126 18:59:28.983621 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:59:29 crc kubenswrapper[4737]: I0126 18:59:29.626604 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144","Type":"ContainerStarted","Data":"31a2a099439e157ecf8493014afd255b7a80069ba8aeaaf6b2eb6c5b49781d9e"} Jan 26 18:59:29 crc kubenswrapper[4737]: I0126 18:59:29.629740 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"44d4092c-abb5-4218-81dc-32ba2257004d","Type":"ContainerStarted","Data":"19222c1560ce758a884e246a915896dd7a1a0926381767e796af16850e93d2c4"} Jan 26 18:59:30 crc kubenswrapper[4737]: I0126 18:59:30.846923 4737 scope.go:117] "RemoveContainer" containerID="06306e7466a0c6f5f61dfb9fca1c925ea9079f79f0d7027946b84c72b13358b0" Jan 26 18:59:31 crc kubenswrapper[4737]: I0126 18:59:31.008477 4737 scope.go:117] "RemoveContainer" containerID="8c379f6429cef2f3fe40f14884abebbff80588aafc47fcf061ea9ab1f406e9aa" Jan 26 18:59:31 crc kubenswrapper[4737]: I0126 18:59:31.681278 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43f4c1d0-e222-4099-ad1a-73d3c9d9530a","Type":"ContainerStarted","Data":"4ba574fb97287279da9f7333fc479ffaab54cc4289c5dc68b251af1ddffb96bf"} Jan 26 18:59:32 crc kubenswrapper[4737]: I0126 18:59:32.713409 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43f4c1d0-e222-4099-ad1a-73d3c9d9530a","Type":"ContainerStarted","Data":"b6a2cffecc9d7b9e0653554ad9f3fb625117ba025f62b00b91fc208910cc2ce6"} Jan 26 18:59:33 crc kubenswrapper[4737]: I0126 18:59:33.731867 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"43f4c1d0-e222-4099-ad1a-73d3c9d9530a","Type":"ContainerStarted","Data":"e438198f179e4a194a158bf533e9d4117a47d4255b7388ffe8874688134480ee"} Jan 26 18:59:35 crc kubenswrapper[4737]: I0126 18:59:35.766593 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43f4c1d0-e222-4099-ad1a-73d3c9d9530a","Type":"ContainerStarted","Data":"11946a94738de0ff568d00935dfd645ff84ba34270f5074d5cbee1f459b1d383"} Jan 26 18:59:35 crc kubenswrapper[4737]: I0126 18:59:35.769355 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 18:59:35 crc kubenswrapper[4737]: I0126 18:59:35.809180 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=16.4903405 podStartE2EDuration="25.809152102s" podCreationTimestamp="2026-01-26 18:59:10 +0000 UTC" firstStartedPulling="2026-01-26 18:59:25.402797064 +0000 UTC m=+1738.710991772" lastFinishedPulling="2026-01-26 18:59:34.721608666 +0000 UTC m=+1748.029803374" observedRunningTime="2026-01-26 18:59:35.791613932 +0000 UTC m=+1749.099808650" watchObservedRunningTime="2026-01-26 18:59:35.809152102 +0000 UTC m=+1749.117346810" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.179991 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.254833 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.255130 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="dnsmasq-dns" containerID="cri-o://3895191728f2e0a03e3de77c7fbfeda4fe6b2bc3cdfcc08cd0e5deefe97a9c53" gracePeriod=10 Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.502303 4737 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f67kv"] Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.508049 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.523941 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f67kv"] Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624194 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624324 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624375 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624470 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: 
\"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624504 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-config\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.624878 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fzd\" (UniqueName: \"kubernetes.io/projected/50a8451d-1c9f-4e7b-a24a-36a22672f896-kube-api-access-m8fzd\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.625031 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.727767 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.727849 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: 
\"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.727942 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.727982 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-config\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.728035 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8fzd\" (UniqueName: \"kubernetes.io/projected/50a8451d-1c9f-4e7b-a24a-36a22672f896-kube-api-access-m8fzd\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.728222 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.728300 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: 
\"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.728753 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.729126 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.729311 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.729426 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.729766 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-config\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 
crc kubenswrapper[4737]: I0126 18:59:36.730496 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50a8451d-1c9f-4e7b-a24a-36a22672f896-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.797765 4737 generic.go:334] "Generic (PLEG): container finished" podID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerID="3895191728f2e0a03e3de77c7fbfeda4fe6b2bc3cdfcc08cd0e5deefe97a9c53" exitCode=0 Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.799394 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" event={"ID":"38de7871-ef90-4700-b77f-abf3c4f9a99d","Type":"ContainerDied","Data":"3895191728f2e0a03e3de77c7fbfeda4fe6b2bc3cdfcc08cd0e5deefe97a9c53"} Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.803253 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8fzd\" (UniqueName: \"kubernetes.io/projected/50a8451d-1c9f-4e7b-a24a-36a22672f896-kube-api-access-m8fzd\") pod \"dnsmasq-dns-6f6df4f56c-f67kv\" (UID: \"50a8451d-1c9f-4e7b-a24a-36a22672f896\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:36 crc kubenswrapper[4737]: I0126 18:59:36.847144 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.172836 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.350589 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.350869 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nss\" (UniqueName: \"kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.350973 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.351296 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.351356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.351425 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.387878 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss" (OuterVolumeSpecName: "kube-api-access-l8nss") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "kube-api-access-l8nss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.467776 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nss\" (UniqueName: \"kubernetes.io/projected/38de7871-ef90-4700-b77f-abf3c4f9a99d-kube-api-access-l8nss\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.557543 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.574612 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.582434 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") pod \"38de7871-ef90-4700-b77f-abf3c4f9a99d\" (UID: \"38de7871-ef90-4700-b77f-abf3c4f9a99d\") " Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.583908 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc kubenswrapper[4737]: W0126 18:59:37.588416 4737 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/38de7871-ef90-4700-b77f-abf3c4f9a99d/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.588516 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.593921 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.596381 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.632724 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config" (OuterVolumeSpecName: "config") pod "38de7871-ef90-4700-b77f-abf3c4f9a99d" (UID: "38de7871-ef90-4700-b77f-abf3c4f9a99d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.686544 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.686577 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.686590 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.686598 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38de7871-ef90-4700-b77f-abf3c4f9a99d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:37 crc 
kubenswrapper[4737]: I0126 18:59:37.755106 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f67kv"] Jan 26 18:59:37 crc kubenswrapper[4737]: W0126 18:59:37.764256 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50a8451d_1c9f_4e7b_a24a_36a22672f896.slice/crio-5ddb63d2ac4b33e0eb68e4aa418d38c7ca74b043ff9d94869ae418bd9f188848 WatchSource:0}: Error finding container 5ddb63d2ac4b33e0eb68e4aa418d38c7ca74b043ff9d94869ae418bd9f188848: Status 404 returned error can't find the container with id 5ddb63d2ac4b33e0eb68e4aa418d38c7ca74b043ff9d94869ae418bd9f188848 Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.816037 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" event={"ID":"38de7871-ef90-4700-b77f-abf3c4f9a99d","Type":"ContainerDied","Data":"690056ace108600476ac20610e5d45511c30302065c1b1edf704d484f9d9451f"} Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.816133 4737 scope.go:117] "RemoveContainer" containerID="3895191728f2e0a03e3de77c7fbfeda4fe6b2bc3cdfcc08cd0e5deefe97a9c53" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.816313 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.820272 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" event={"ID":"50a8451d-1c9f-4e7b-a24a-36a22672f896","Type":"ContainerStarted","Data":"5ddb63d2ac4b33e0eb68e4aa418d38c7ca74b043ff9d94869ae418bd9f188848"} Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.895792 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.903334 4737 scope.go:117] "RemoveContainer" containerID="e4f2f5c857c1c7e95c45d76e27956cde41b8ff646f347ec3ac87ede251084f09" Jan 26 18:59:37 crc kubenswrapper[4737]: I0126 18:59:37.911892 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-2rt9q"] Jan 26 18:59:38 crc kubenswrapper[4737]: I0126 18:59:38.836046 4737 generic.go:334] "Generic (PLEG): container finished" podID="50a8451d-1c9f-4e7b-a24a-36a22672f896" containerID="dd05af8cd529ca05d7cf64e19e263126e160be787775c4a5eb70d2c4f095ff62" exitCode=0 Jan 26 18:59:38 crc kubenswrapper[4737]: I0126 18:59:38.836556 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" event={"ID":"50a8451d-1c9f-4e7b-a24a-36a22672f896","Type":"ContainerDied","Data":"dd05af8cd529ca05d7cf64e19e263126e160be787775c4a5eb70d2c4f095ff62"} Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.005409 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" path="/var/lib/kubelet/pods/38de7871-ef90-4700-b77f-abf3c4f9a99d/volumes" Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.864274 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k2vkj" 
event={"ID":"7f3a0926-ce79-4117-b8e6-96fcf0a492fc","Type":"ContainerStarted","Data":"63e9ba0775d01058dfbad686887b37bfe07af5dfdd2248eb938214feb97f3122"} Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.891400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" event={"ID":"50a8451d-1c9f-4e7b-a24a-36a22672f896","Type":"ContainerStarted","Data":"3c400d64f95227ab5a11c932a0621506166ddeca6c3b8fc9951a9e1935e5a570"} Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.894723 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.921257 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-k2vkj" podStartSLOduration=1.9930116039999999 podStartE2EDuration="37.921211326s" podCreationTimestamp="2026-01-26 18:59:02 +0000 UTC" firstStartedPulling="2026-01-26 18:59:03.295643498 +0000 UTC m=+1716.603838206" lastFinishedPulling="2026-01-26 18:59:39.22384323 +0000 UTC m=+1752.532037928" observedRunningTime="2026-01-26 18:59:39.889962806 +0000 UTC m=+1753.198157534" watchObservedRunningTime="2026-01-26 18:59:39.921211326 +0000 UTC m=+1753.229406024" Jan 26 18:59:39 crc kubenswrapper[4737]: I0126 18:59:39.970531 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" podStartSLOduration=3.970506715 podStartE2EDuration="3.970506715s" podCreationTimestamp="2026-01-26 18:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:59:39.913113059 +0000 UTC m=+1753.221307787" watchObservedRunningTime="2026-01-26 18:59:39.970506715 +0000 UTC m=+1753.278701423" Jan 26 18:59:42 crc kubenswrapper[4737]: I0126 18:59:42.934545 4737 generic.go:334] "Generic (PLEG): container finished" 
podID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" containerID="63e9ba0775d01058dfbad686887b37bfe07af5dfdd2248eb938214feb97f3122" exitCode=0 Jan 26 18:59:42 crc kubenswrapper[4737]: I0126 18:59:42.934676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k2vkj" event={"ID":"7f3a0926-ce79-4117-b8e6-96fcf0a492fc","Type":"ContainerDied","Data":"63e9ba0775d01058dfbad686887b37bfe07af5dfdd2248eb938214feb97f3122"} Jan 26 18:59:43 crc kubenswrapper[4737]: I0126 18:59:43.982798 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:59:43 crc kubenswrapper[4737]: E0126 18:59:43.983295 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.476731 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.628614 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8pdh\" (UniqueName: \"kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh\") pod \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.628903 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle\") pod \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.628985 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data\") pod \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\" (UID: \"7f3a0926-ce79-4117-b8e6-96fcf0a492fc\") " Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.636461 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh" (OuterVolumeSpecName: "kube-api-access-t8pdh") pod "7f3a0926-ce79-4117-b8e6-96fcf0a492fc" (UID: "7f3a0926-ce79-4117-b8e6-96fcf0a492fc"). InnerVolumeSpecName "kube-api-access-t8pdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.664690 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f3a0926-ce79-4117-b8e6-96fcf0a492fc" (UID: "7f3a0926-ce79-4117-b8e6-96fcf0a492fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.725535 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data" (OuterVolumeSpecName: "config-data") pod "7f3a0926-ce79-4117-b8e6-96fcf0a492fc" (UID: "7f3a0926-ce79-4117-b8e6-96fcf0a492fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.732307 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.732336 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.732346 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8pdh\" (UniqueName: \"kubernetes.io/projected/7f3a0926-ce79-4117-b8e6-96fcf0a492fc-kube-api-access-t8pdh\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.961620 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k2vkj" event={"ID":"7f3a0926-ce79-4117-b8e6-96fcf0a492fc","Type":"ContainerDied","Data":"12297f3720eeef90ca194ba9d7786e83bed69b06c2febbfef59f7f8b3d2df749"} Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.961686 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12297f3720eeef90ca194ba9d7786e83bed69b06c2febbfef59f7f8b3d2df749" Jan 26 18:59:44 crc kubenswrapper[4737]: I0126 18:59:44.962115 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-k2vkj" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.938595 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-858867c5df-ppbxf"] Jan 26 18:59:45 crc kubenswrapper[4737]: E0126 18:59:45.939523 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="dnsmasq-dns" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.939537 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="dnsmasq-dns" Jan 26 18:59:45 crc kubenswrapper[4737]: E0126 18:59:45.939553 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="init" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.939559 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="init" Jan 26 18:59:45 crc kubenswrapper[4737]: E0126 18:59:45.939579 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" containerName="heat-db-sync" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.939599 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" containerName="heat-db-sync" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.939867 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" containerName="heat-db-sync" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.939905 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="38de7871-ef90-4700-b77f-abf3c4f9a99d" containerName="dnsmasq-dns" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.941057 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.954015 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-858867c5df-ppbxf"] Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.963471 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmvxv\" (UniqueName: \"kubernetes.io/projected/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-kube-api-access-dmvxv\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.963566 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.963624 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data-custom\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:45 crc kubenswrapper[4737]: I0126 18:59:45.963692 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-combined-ca-bundle\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.054979 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-6b78c96546-lpdfk"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.058053 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068316 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thv5p\" (UniqueName: \"kubernetes.io/projected/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-kube-api-access-thv5p\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068410 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-internal-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068528 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data-custom\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068684 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmvxv\" (UniqueName: \"kubernetes.io/projected/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-kube-api-access-dmvxv\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068796 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068843 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data-custom\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068894 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-public-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.068948 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-combined-ca-bundle\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.069023 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-combined-ca-bundle\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.069090 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.072207 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cfff6bbff-s577r"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.077543 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-combined-ca-bundle\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.077731 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data-custom\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.077770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-config-data\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.079020 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.117768 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmvxv\" (UniqueName: \"kubernetes.io/projected/de816c7c-1d5a-4226-b17c-b4f5a5c8d07b-kube-api-access-dmvxv\") pod \"heat-engine-858867c5df-ppbxf\" (UID: \"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b\") " pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.127873 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b78c96546-lpdfk"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.147399 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cfff6bbff-s577r"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172051 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-public-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172138 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data-custom\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172183 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-combined-ca-bundle\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 
18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172213 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-combined-ca-bundle\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172245 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172294 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thv5p\" (UniqueName: \"kubernetes.io/projected/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-kube-api-access-thv5p\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172331 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wsg\" (UniqueName: \"kubernetes.io/projected/f4b0bd32-90db-4eae-a748-903c5d5cd931-kube-api-access-p4wsg\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172348 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-internal-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc 
kubenswrapper[4737]: I0126 18:59:46.172363 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-public-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172425 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data-custom\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172482 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.172523 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-internal-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.178963 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-internal-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: 
I0126 18:59:46.179023 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-public-tls-certs\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.184536 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.187885 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-combined-ca-bundle\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.188398 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-config-data-custom\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.197142 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thv5p\" (UniqueName: \"kubernetes.io/projected/bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb-kube-api-access-thv5p\") pod \"heat-cfnapi-6b78c96546-lpdfk\" (UID: \"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb\") " pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.264199 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.274674 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-internal-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.274807 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data-custom\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.274845 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-combined-ca-bundle\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.274957 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-public-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.274983 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4wsg\" (UniqueName: \"kubernetes.io/projected/f4b0bd32-90db-4eae-a748-903c5d5cd931-kube-api-access-p4wsg\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " 
pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.275109 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.279453 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-combined-ca-bundle\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.280183 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.281113 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-public-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.282403 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-config-data-custom\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.282985 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0bd32-90db-4eae-a748-903c5d5cd931-internal-tls-certs\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.296946 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wsg\" (UniqueName: \"kubernetes.io/projected/f4b0bd32-90db-4eae-a748-903c5d5cd931-kube-api-access-p4wsg\") pod \"heat-api-cfff6bbff-s577r\" (UID: \"f4b0bd32-90db-4eae-a748-903c5d5cd931\") " pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.373048 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.381592 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.809397 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-858867c5df-ppbxf"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.848337 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-f67kv" Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.936898 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"] Jan 26 18:59:46 crc kubenswrapper[4737]: I0126 18:59:46.937175 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="dnsmasq-dns" containerID="cri-o://9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931" gracePeriod=10 Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.018464 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-858867c5df-ppbxf" event={"ID":"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b","Type":"ContainerStarted","Data":"23e78af6a19f09b512fb4e94a63eaafacc4414f8403d26d85d71192750990281"} Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.026749 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b78c96546-lpdfk"] Jan 26 18:59:47 crc kubenswrapper[4737]: W0126 18:59:47.057213 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb9e95d_409d_4b81_a1e4_1dca34c9d1cb.slice/crio-5b7c7fba3222517b20f8e08101071f6c0e86fe97afaedb791e34a4a10b8f3c29 WatchSource:0}: Error finding container 5b7c7fba3222517b20f8e08101071f6c0e86fe97afaedb791e34a4a10b8f3c29: Status 404 returned error can't find the container with id 5b7c7fba3222517b20f8e08101071f6c0e86fe97afaedb791e34a4a10b8f3c29 Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.200955 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cfff6bbff-s577r"] Jan 26 18:59:47 crc kubenswrapper[4737]: W0126 18:59:47.207216 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b0bd32_90db_4eae_a748_903c5d5cd931.slice/crio-5829949bad65febf2418a533c2e879f7b04135314019acb7b99558adea5cee05 WatchSource:0}: Error finding container 5829949bad65febf2418a533c2e879f7b04135314019acb7b99558adea5cee05: Status 404 returned error can't find the container with id 5829949bad65febf2418a533c2e879f7b04135314019acb7b99558adea5cee05 Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.716821 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824124 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824231 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824335 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vcl\" (UniqueName: \"kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824513 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824595 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824629 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.824742 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc\") pod \"25444fbe-165b-40a7-b446-8bec4dfb854d\" (UID: \"25444fbe-165b-40a7-b446-8bec4dfb854d\") " Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.861012 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl" (OuterVolumeSpecName: "kube-api-access-r8vcl") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "kube-api-access-r8vcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.927862 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vcl\" (UniqueName: \"kubernetes.io/projected/25444fbe-165b-40a7-b446-8bec4dfb854d-kube-api-access-r8vcl\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.934716 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.934737 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.934898 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.964212 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.977350 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config" (OuterVolumeSpecName: "config") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:47 crc kubenswrapper[4737]: I0126 18:59:47.997774 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25444fbe-165b-40a7-b446-8bec4dfb854d" (UID: "25444fbe-165b-40a7-b446-8bec4dfb854d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.023623 4737 generic.go:334] "Generic (PLEG): container finished" podID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerID="9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931" exitCode=0 Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.023692 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" event={"ID":"25444fbe-165b-40a7-b446-8bec4dfb854d","Type":"ContainerDied","Data":"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931"} Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.023742 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" event={"ID":"25444fbe-165b-40a7-b446-8bec4dfb854d","Type":"ContainerDied","Data":"8e88e05b28ba8565bbb97884f132205a50c3c2139b3545855f80c9b504be92dc"} Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.023760 4737 scope.go:117] "RemoveContainer" containerID="9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.023882 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-92v7q" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032500 4737 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032547 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032563 4737 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032578 4737 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032590 4737 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.032601 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25444fbe-165b-40a7-b446-8bec4dfb854d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.036629 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" event={"ID":"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb","Type":"ContainerStarted","Data":"5b7c7fba3222517b20f8e08101071f6c0e86fe97afaedb791e34a4a10b8f3c29"} Jan 26 18:59:48 
crc kubenswrapper[4737]: I0126 18:59:48.039687 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-858867c5df-ppbxf" event={"ID":"de816c7c-1d5a-4226-b17c-b4f5a5c8d07b","Type":"ContainerStarted","Data":"3a73795a4eb12a86853280e6ba53c793c4c82d38a7108713c91328a0973d4a8c"} Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.042533 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.049870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cfff6bbff-s577r" event={"ID":"f4b0bd32-90db-4eae-a748-903c5d5cd931","Type":"ContainerStarted","Data":"5829949bad65febf2418a533c2e879f7b04135314019acb7b99558adea5cee05"} Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.075832 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-858867c5df-ppbxf" podStartSLOduration=3.075804756 podStartE2EDuration="3.075804756s" podCreationTimestamp="2026-01-26 18:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:59:48.069844032 +0000 UTC m=+1761.378038730" watchObservedRunningTime="2026-01-26 18:59:48.075804756 +0000 UTC m=+1761.383999464" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.083054 4737 scope.go:117] "RemoveContainer" containerID="5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.142050 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"] Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.174092 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-92v7q"] Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.226109 4737 scope.go:117] "RemoveContainer" 
containerID="9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931" Jan 26 18:59:48 crc kubenswrapper[4737]: E0126 18:59:48.226902 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931\": container with ID starting with 9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931 not found: ID does not exist" containerID="9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.227009 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931"} err="failed to get container status \"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931\": rpc error: code = NotFound desc = could not find container \"9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931\": container with ID starting with 9cc3f91555982e2441d64fb4a6979cf1b31ba6767a247398fa73c5aefce34931 not found: ID does not exist" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.227339 4737 scope.go:117] "RemoveContainer" containerID="5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da" Jan 26 18:59:48 crc kubenswrapper[4737]: E0126 18:59:48.228917 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da\": container with ID starting with 5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da not found: ID does not exist" containerID="5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da" Jan 26 18:59:48 crc kubenswrapper[4737]: I0126 18:59:48.228968 4737 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da"} err="failed to get container status \"5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da\": rpc error: code = NotFound desc = could not find container \"5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da\": container with ID starting with 5e039cf758f095a4a15cfa9f3a7ada9f578f7c5f939613b226aef9873ffad6da not found: ID does not exist" Jan 26 18:59:49 crc kubenswrapper[4737]: I0126 18:59:49.015855 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" path="/var/lib/kubelet/pods/25444fbe-165b-40a7-b446-8bec4dfb854d/volumes" Jan 26 18:59:50 crc kubenswrapper[4737]: I0126 18:59:50.114735 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" event={"ID":"bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb","Type":"ContainerStarted","Data":"8cd5a223548e3c71efe2ec96484c6785c4b425c78c6a9791be02a862a0d7de17"} Jan 26 18:59:50 crc kubenswrapper[4737]: I0126 18:59:50.115373 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:50 crc kubenswrapper[4737]: I0126 18:59:50.117089 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cfff6bbff-s577r" event={"ID":"f4b0bd32-90db-4eae-a748-903c5d5cd931","Type":"ContainerStarted","Data":"da06d2ad4c15acba86f971faad81ce3a19163ef040b090ae6603b6e208edb241"} Jan 26 18:59:50 crc kubenswrapper[4737]: I0126 18:59:50.152609 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" podStartSLOduration=1.8963329500000001 podStartE2EDuration="4.152581662s" podCreationTimestamp="2026-01-26 18:59:46 +0000 UTC" firstStartedPulling="2026-01-26 18:59:47.11537095 +0000 UTC m=+1760.423565658" lastFinishedPulling="2026-01-26 18:59:49.371619662 +0000 UTC m=+1762.679814370" 
observedRunningTime="2026-01-26 18:59:50.135768712 +0000 UTC m=+1763.443963420" watchObservedRunningTime="2026-01-26 18:59:50.152581662 +0000 UTC m=+1763.460776370" Jan 26 18:59:50 crc kubenswrapper[4737]: I0126 18:59:50.166651 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cfff6bbff-s577r" podStartSLOduration=2.001366356 podStartE2EDuration="4.166625324s" podCreationTimestamp="2026-01-26 18:59:46 +0000 UTC" firstStartedPulling="2026-01-26 18:59:47.21029344 +0000 UTC m=+1760.518488148" lastFinishedPulling="2026-01-26 18:59:49.375552398 +0000 UTC m=+1762.683747116" observedRunningTime="2026-01-26 18:59:50.15864191 +0000 UTC m=+1763.466836638" watchObservedRunningTime="2026-01-26 18:59:50.166625324 +0000 UTC m=+1763.474820032" Jan 26 18:59:51 crc kubenswrapper[4737]: I0126 18:59:51.134637 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.388390 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5"] Jan 26 18:59:56 crc kubenswrapper[4737]: E0126 18:59:56.390264 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="dnsmasq-dns" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.390285 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="dnsmasq-dns" Jan 26 18:59:56 crc kubenswrapper[4737]: E0126 18:59:56.390314 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="init" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.390320 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="init" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.390594 4737 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="25444fbe-165b-40a7-b446-8bec4dfb854d" containerName="dnsmasq-dns" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.391813 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.393910 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.394006 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.394741 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.395914 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.415398 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5"] Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.487204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.487313 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.487459 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngkv\" (UniqueName: \"kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.487589 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.589789 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.589887 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.589984 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngkv\" (UniqueName: \"kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.590136 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.601166 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.607760 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.608785 4737 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5ngkv\" (UniqueName: \"kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.613659 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:56 crc kubenswrapper[4737]: I0126 18:59:56.722187 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 18:59:57 crc kubenswrapper[4737]: I0126 18:59:57.695422 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5"] Jan 26 18:59:57 crc kubenswrapper[4737]: I0126 18:59:57.982430 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 18:59:57 crc kubenswrapper[4737]: E0126 18:59:57.983105 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.243641 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" event={"ID":"67eb47db-a20a-4f95-97c2-67df12c02360","Type":"ContainerStarted","Data":"806b14976819cb5608987f7766e742fd459c4ba5b95faf0678242163817474a2"} Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.420999 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b78c96546-lpdfk" Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.548718 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.549770 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" containerName="heat-cfnapi" containerID="cri-o://34933807e8713fbb46bfe29c3169d3d3a8435a9b015d9b5d3c70887166bfcc2c" gracePeriod=60 Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.569565 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-cfff6bbff-s577r" Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.695060 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 18:59:58 crc kubenswrapper[4737]: I0126 18:59:58.705583 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5494f5754b-8k4bc" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" containerName="heat-api" containerID="cri-o://2bb1c3822e504e928e9e99802f63896084a67c29e44d0d476bf0cfbf3a001047" gracePeriod=60 Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.150194 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86"] Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.152404 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.155830 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.156422 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.174262 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86"] Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.245719 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpw9\" (UniqueName: \"kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.245817 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.246306 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.355609 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpw9\" (UniqueName: \"kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.355814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.356590 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.358037 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.369153 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.374891 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpw9\" (UniqueName: \"kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9\") pod \"collect-profiles-29490900-5hl86\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:00 crc kubenswrapper[4737]: I0126 19:00:00.490382 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:01 crc kubenswrapper[4737]: W0126 19:00:01.023098 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75833d1d_a0c8_4b19_8754_f491c70ce8e3.slice/crio-71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170 WatchSource:0}: Error finding container 71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170: Status 404 returned error can't find the container with id 71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170 Jan 26 19:00:01 crc kubenswrapper[4737]: I0126 19:00:01.026217 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86"] Jan 26 19:00:01 crc kubenswrapper[4737]: I0126 19:00:01.285387 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" event={"ID":"75833d1d-a0c8-4b19-8754-f491c70ce8e3","Type":"ContainerStarted","Data":"5108861c4cb099fc4e4b3a0f817369f4ef64d904b5ae6533f7c5aca450b244de"} Jan 26 19:00:01 crc 
kubenswrapper[4737]: I0126 19:00:01.285755 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" event={"ID":"75833d1d-a0c8-4b19-8754-f491c70ce8e3","Type":"ContainerStarted","Data":"71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170"} Jan 26 19:00:01 crc kubenswrapper[4737]: I0126 19:00:01.320005 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" podStartSLOduration=1.31998293 podStartE2EDuration="1.31998293s" podCreationTimestamp="2026-01-26 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:01.304651097 +0000 UTC m=+1774.612845805" watchObservedRunningTime="2026-01-26 19:00:01.31998293 +0000 UTC m=+1774.628177638" Jan 26 19:00:01 crc kubenswrapper[4737]: I0126 19:00:01.879639 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5494f5754b-8k4bc" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.226:8004/healthcheck\": read tcp 10.217.0.2:52226->10.217.0.226:8004: read: connection reset by peer" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:01.998087 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.225:8000/healthcheck\": read tcp 10.217.0.2:42258->10.217.0.225:8000: read: connection reset by peer" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.378002 4737 generic.go:334] "Generic (PLEG): container finished" podID="3e432095-1a99-44ef-8941-dd57947cfea2" containerID="2bb1c3822e504e928e9e99802f63896084a67c29e44d0d476bf0cfbf3a001047" exitCode=0 Jan 26 19:00:02 crc 
kubenswrapper[4737]: I0126 19:00:02.378507 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5494f5754b-8k4bc" event={"ID":"3e432095-1a99-44ef-8941-dd57947cfea2","Type":"ContainerDied","Data":"2bb1c3822e504e928e9e99802f63896084a67c29e44d0d476bf0cfbf3a001047"} Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.385554 4737 generic.go:334] "Generic (PLEG): container finished" podID="44d4092c-abb5-4218-81dc-32ba2257004d" containerID="19222c1560ce758a884e246a915896dd7a1a0926381767e796af16850e93d2c4" exitCode=0 Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.385613 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"44d4092c-abb5-4218-81dc-32ba2257004d","Type":"ContainerDied","Data":"19222c1560ce758a884e246a915896dd7a1a0926381767e796af16850e93d2c4"} Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.392978 4737 generic.go:334] "Generic (PLEG): container finished" podID="75833d1d-a0c8-4b19-8754-f491c70ce8e3" containerID="5108861c4cb099fc4e4b3a0f817369f4ef64d904b5ae6533f7c5aca450b244de" exitCode=0 Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.393175 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" event={"ID":"75833d1d-a0c8-4b19-8754-f491c70ce8e3","Type":"ContainerDied","Data":"5108861c4cb099fc4e4b3a0f817369f4ef64d904b5ae6533f7c5aca450b244de"} Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.396920 4737 generic.go:334] "Generic (PLEG): container finished" podID="e5db87e3-e7cb-4248-bc3a-5c6f5d92c144" containerID="31a2a099439e157ecf8493014afd255b7a80069ba8aeaaf6b2eb6c5b49781d9e" exitCode=0 Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.397011 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144","Type":"ContainerDied","Data":"31a2a099439e157ecf8493014afd255b7a80069ba8aeaaf6b2eb6c5b49781d9e"} Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.403255 4737 generic.go:334] "Generic (PLEG): container finished" podID="514a8219-8732-4d4b-abe6-154d215f65ed" containerID="34933807e8713fbb46bfe29c3169d3d3a8435a9b015d9b5d3c70887166bfcc2c" exitCode=0 Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.403314 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" event={"ID":"514a8219-8732-4d4b-abe6-154d215f65ed","Type":"ContainerDied","Data":"34933807e8713fbb46bfe29c3169d3d3a8435a9b015d9b5d3c70887166bfcc2c"} Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.823767 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.856563 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968449 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968537 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968632 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9bv\" (UniqueName: \"kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968676 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczkp\" (UniqueName: \"kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968791 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968876 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968927 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.968984 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.969024 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.969045 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle\") pod \"3e432095-1a99-44ef-8941-dd57947cfea2\" (UID: \"3e432095-1a99-44ef-8941-dd57947cfea2\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.969103 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") 
" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.969156 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom\") pod \"514a8219-8732-4d4b-abe6-154d215f65ed\" (UID: \"514a8219-8732-4d4b-abe6-154d215f65ed\") " Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.974367 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv" (OuterVolumeSpecName: "kube-api-access-rj9bv") pod "514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "kube-api-access-rj9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.974641 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.974870 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:02 crc kubenswrapper[4737]: I0126 19:00:02.977281 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp" (OuterVolumeSpecName: "kube-api-access-lczkp") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "kube-api-access-lczkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.030721 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.056387 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093001 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093043 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093057 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093084 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093096 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9bv\" (UniqueName: \"kubernetes.io/projected/514a8219-8732-4d4b-abe6-154d215f65ed-kube-api-access-rj9bv\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.093112 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczkp\" (UniqueName: \"kubernetes.io/projected/3e432095-1a99-44ef-8941-dd57947cfea2-kube-api-access-lczkp\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.101306 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod 
"514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.101450 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.111494 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data" (OuterVolumeSpecName: "config-data") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.114329 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.146211 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3e432095-1a99-44ef-8941-dd57947cfea2" (UID: "3e432095-1a99-44ef-8941-dd57947cfea2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.150489 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data" (OuterVolumeSpecName: "config-data") pod "514a8219-8732-4d4b-abe6-154d215f65ed" (UID: "514a8219-8732-4d4b-abe6-154d215f65ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195797 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195834 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195844 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195854 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195868 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e432095-1a99-44ef-8941-dd57947cfea2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.195876 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/514a8219-8732-4d4b-abe6-154d215f65ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.435918 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5db87e3-e7cb-4248-bc3a-5c6f5d92c144","Type":"ContainerStarted","Data":"c5a912deb1cd808acfba15d71dc487eed8d44506a36e58f49e1a564e0b806083"} Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.436273 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.438710 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" event={"ID":"514a8219-8732-4d4b-abe6-154d215f65ed","Type":"ContainerDied","Data":"0557ea9d71d3b623a7939eb8d0a1d6b2b4745470d449e6b3b26a5cd1ad736075"} Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.438758 4737 scope.go:117] "RemoveContainer" containerID="34933807e8713fbb46bfe29c3169d3d3a8435a9b015d9b5d3c70887166bfcc2c" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.438889 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-755b5655f9-7jhg9" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.445204 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5494f5754b-8k4bc" event={"ID":"3e432095-1a99-44ef-8941-dd57947cfea2","Type":"ContainerDied","Data":"4b84466188b9d0e36a36e1538367c696778a98d0c965822666729d8bab2cb6a1"} Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.445295 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5494f5754b-8k4bc" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.450921 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"44d4092c-abb5-4218-81dc-32ba2257004d","Type":"ContainerStarted","Data":"46560796d59e2dd102ca8789e233320ae10038bd2d0c1ca6836929257b1513b3"} Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.451708 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.479656 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.479617522 podStartE2EDuration="38.479617522s" podCreationTimestamp="2026-01-26 18:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:03.470238774 +0000 UTC m=+1776.778433472" watchObservedRunningTime="2026-01-26 19:00:03.479617522 +0000 UTC m=+1776.787812230" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.512901 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=38.51279492 podStartE2EDuration="38.51279492s" podCreationTimestamp="2026-01-26 18:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:03.505369098 +0000 UTC m=+1776.813563816" watchObservedRunningTime="2026-01-26 19:00:03.51279492 +0000 UTC m=+1776.820989628" Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.599041 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.633126 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-755b5655f9-7jhg9"] Jan 26 
19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.651734 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 19:00:03 crc kubenswrapper[4737]: I0126 19:00:03.674954 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5494f5754b-8k4bc"] Jan 26 19:00:04 crc kubenswrapper[4737]: I0126 19:00:04.995579 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" path="/var/lib/kubelet/pods/3e432095-1a99-44ef-8941-dd57947cfea2/volumes" Jan 26 19:00:04 crc kubenswrapper[4737]: I0126 19:00:04.996702 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" path="/var/lib/kubelet/pods/514a8219-8732-4d4b-abe6-154d215f65ed/volumes" Jan 26 19:00:06 crc kubenswrapper[4737]: I0126 19:00:06.308359 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-858867c5df-ppbxf" Jan 26 19:00:06 crc kubenswrapper[4737]: I0126 19:00:06.401426 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 19:00:06 crc kubenswrapper[4737]: I0126 19:00:06.402003 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5f8757c766-6hm2h" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" containerID="cri-o://2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" gracePeriod=60 Jan 26 19:00:09 crc kubenswrapper[4737]: E0126 19:00:09.050682 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:09 crc kubenswrapper[4737]: E0126 19:00:09.053182 4737 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:09 crc kubenswrapper[4737]: E0126 19:00:09.054932 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:09 crc kubenswrapper[4737]: E0126 19:00:09.054984 4737 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5f8757c766-6hm2h" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" Jan 26 19:00:10 crc kubenswrapper[4737]: I0126 19:00:10.737526 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.126573 4737 scope.go:117] "RemoveContainer" containerID="2bb1c3822e504e928e9e99802f63896084a67c29e44d0d476bf0cfbf3a001047" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.414723 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.471549 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qpw9\" (UniqueName: \"kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9\") pod \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.471616 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume\") pod \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.471678 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume\") pod \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\" (UID: \"75833d1d-a0c8-4b19-8754-f491c70ce8e3\") " Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.472421 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "75833d1d-a0c8-4b19-8754-f491c70ce8e3" (UID: "75833d1d-a0c8-4b19-8754-f491c70ce8e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.479273 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9" (OuterVolumeSpecName: "kube-api-access-6qpw9") pod "75833d1d-a0c8-4b19-8754-f491c70ce8e3" (UID: "75833d1d-a0c8-4b19-8754-f491c70ce8e3"). 
InnerVolumeSpecName "kube-api-access-6qpw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.479402 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75833d1d-a0c8-4b19-8754-f491c70ce8e3" (UID: "75833d1d-a0c8-4b19-8754-f491c70ce8e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.575618 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75833d1d-a0c8-4b19-8754-f491c70ce8e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.576014 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75833d1d-a0c8-4b19-8754-f491c70ce8e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.576030 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qpw9\" (UniqueName: \"kubernetes.io/projected/75833d1d-a0c8-4b19-8754-f491c70ce8e3-kube-api-access-6qpw9\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.593914 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" event={"ID":"75833d1d-a0c8-4b19-8754-f491c70ce8e3","Type":"ContainerDied","Data":"71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170"} Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.593957 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b09d65339eadde257b1f22f4779d7177c0e2c593b62599adef23974a85e170" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.593924 4737 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86" Jan 26 19:00:11 crc kubenswrapper[4737]: I0126 19:00:11.982245 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:00:11 crc kubenswrapper[4737]: E0126 19:00:11.982770 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.616102 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" event={"ID":"67eb47db-a20a-4f95-97c2-67df12c02360","Type":"ContainerStarted","Data":"7b502660883b694f4241b82ba282feb68a2305c4c85206e6da01859b49beb6af"} Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.637177 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" podStartSLOduration=3.110280231 podStartE2EDuration="16.637158563s" podCreationTimestamp="2026-01-26 18:59:56 +0000 UTC" firstStartedPulling="2026-01-26 18:59:57.70037938 +0000 UTC m=+1771.008574088" lastFinishedPulling="2026-01-26 19:00:11.227257712 +0000 UTC m=+1784.535452420" observedRunningTime="2026-01-26 19:00:12.633228767 +0000 UTC m=+1785.941423485" watchObservedRunningTime="2026-01-26 19:00:12.637158563 +0000 UTC m=+1785.945353271" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.693084 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-8cnbf"] Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 
19:00:12.704861 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-8cnbf"] Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.848480 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pv5s6"] Jan 26 19:00:12 crc kubenswrapper[4737]: E0126 19:00:12.849152 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" containerName="heat-api" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849179 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" containerName="heat-api" Jan 26 19:00:12 crc kubenswrapper[4737]: E0126 19:00:12.849200 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" containerName="heat-cfnapi" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849211 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" containerName="heat-cfnapi" Jan 26 19:00:12 crc kubenswrapper[4737]: E0126 19:00:12.849245 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75833d1d-a0c8-4b19-8754-f491c70ce8e3" containerName="collect-profiles" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849254 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="75833d1d-a0c8-4b19-8754-f491c70ce8e3" containerName="collect-profiles" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849517 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e432095-1a99-44ef-8941-dd57947cfea2" containerName="heat-api" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849556 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="75833d1d-a0c8-4b19-8754-f491c70ce8e3" containerName="collect-profiles" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.849581 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="514a8219-8732-4d4b-abe6-154d215f65ed" 
containerName="heat-cfnapi" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.850665 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.859195 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.889337 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pv5s6"] Jan 26 19:00:12 crc kubenswrapper[4737]: I0126 19:00:12.999902 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab0f9a5-ed4e-434e-ac42-e5293fcf921c" path="/var/lib/kubelet/pods/0ab0f9a5-ed4e-434e-ac42-e5293fcf921c/volumes" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.013832 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.014396 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.014796 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.014919 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpr95\" (UniqueName: \"kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.117386 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.117647 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.117735 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.117780 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpr95\" (UniqueName: \"kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.127999 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.133256 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.145307 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.153769 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpr95\" (UniqueName: \"kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95\") pod \"aodh-db-sync-pv5s6\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:13 crc kubenswrapper[4737]: I0126 19:00:13.191344 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:14 crc kubenswrapper[4737]: I0126 19:00:13.866456 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pv5s6"] Jan 26 19:00:14 crc kubenswrapper[4737]: W0126 19:00:13.876787 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b0d6ef5_d1e3_4a80_83c1_04f01fc707dc.slice/crio-201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859 WatchSource:0}: Error finding container 201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859: Status 404 returned error can't find the container with id 201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859 Jan 26 19:00:14 crc kubenswrapper[4737]: I0126 19:00:14.647273 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pv5s6" event={"ID":"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc","Type":"ContainerStarted","Data":"201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859"} Jan 26 19:00:16 crc kubenswrapper[4737]: I0126 19:00:16.018911 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e5db87e3-e7cb-4248-bc3a-5c6f5d92c144" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.16:5671: connect: connection refused" Jan 26 19:00:16 crc kubenswrapper[4737]: I0126 19:00:16.371679 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="44d4092c-abb5-4218-81dc-32ba2257004d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.17:5671: connect: connection refused" Jan 26 19:00:19 crc kubenswrapper[4737]: E0126 19:00:19.050801 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:19 crc kubenswrapper[4737]: E0126 19:00:19.052098 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:19 crc kubenswrapper[4737]: E0126 19:00:19.058424 4737 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 19:00:19 crc kubenswrapper[4737]: E0126 19:00:19.058506 4737 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5f8757c766-6hm2h" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" Jan 26 19:00:20 crc kubenswrapper[4737]: I0126 19:00:20.733031 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pv5s6" event={"ID":"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc","Type":"ContainerStarted","Data":"0f3e5a988859ee2f6011c7e863d600a3f9dc924ea7d67718e116dfa56ddf5a40"} Jan 26 19:00:20 crc kubenswrapper[4737]: I0126 19:00:20.756025 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pv5s6" podStartSLOduration=2.986816528 podStartE2EDuration="8.756005414s" podCreationTimestamp="2026-01-26 19:00:12 +0000 UTC" firstStartedPulling="2026-01-26 19:00:13.879480878 +0000 UTC m=+1787.187675586" lastFinishedPulling="2026-01-26 
19:00:19.648669764 +0000 UTC m=+1792.956864472" observedRunningTime="2026-01-26 19:00:20.754213221 +0000 UTC m=+1794.062407949" watchObservedRunningTime="2026-01-26 19:00:20.756005414 +0000 UTC m=+1794.064200122" Jan 26 19:00:22 crc kubenswrapper[4737]: I0126 19:00:22.770879 4737 generic.go:334] "Generic (PLEG): container finished" podID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" exitCode=0 Jan 26 19:00:22 crc kubenswrapper[4737]: I0126 19:00:22.771461 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8757c766-6hm2h" event={"ID":"b65814f9-7380-40c2-8d93-d95858c98d6b","Type":"ContainerDied","Data":"2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c"} Jan 26 19:00:22 crc kubenswrapper[4737]: I0126 19:00:22.773895 4737 generic.go:334] "Generic (PLEG): container finished" podID="6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" containerID="0f3e5a988859ee2f6011c7e863d600a3f9dc924ea7d67718e116dfa56ddf5a40" exitCode=0 Jan 26 19:00:22 crc kubenswrapper[4737]: I0126 19:00:22.773926 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pv5s6" event={"ID":"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc","Type":"ContainerDied","Data":"0f3e5a988859ee2f6011c7e863d600a3f9dc924ea7d67718e116dfa56ddf5a40"} Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.124336 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.300131 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vcmz\" (UniqueName: \"kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz\") pod \"b65814f9-7380-40c2-8d93-d95858c98d6b\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.302374 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data\") pod \"b65814f9-7380-40c2-8d93-d95858c98d6b\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.302447 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle\") pod \"b65814f9-7380-40c2-8d93-d95858c98d6b\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.302487 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom\") pod \"b65814f9-7380-40c2-8d93-d95858c98d6b\" (UID: \"b65814f9-7380-40c2-8d93-d95858c98d6b\") " Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.308182 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b65814f9-7380-40c2-8d93-d95858c98d6b" (UID: "b65814f9-7380-40c2-8d93-d95858c98d6b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.323273 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz" (OuterVolumeSpecName: "kube-api-access-7vcmz") pod "b65814f9-7380-40c2-8d93-d95858c98d6b" (UID: "b65814f9-7380-40c2-8d93-d95858c98d6b"). InnerVolumeSpecName "kube-api-access-7vcmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.337143 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b65814f9-7380-40c2-8d93-d95858c98d6b" (UID: "b65814f9-7380-40c2-8d93-d95858c98d6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.370486 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data" (OuterVolumeSpecName: "config-data") pod "b65814f9-7380-40c2-8d93-d95858c98d6b" (UID: "b65814f9-7380-40c2-8d93-d95858c98d6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.408936 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vcmz\" (UniqueName: \"kubernetes.io/projected/b65814f9-7380-40c2-8d93-d95858c98d6b-kube-api-access-7vcmz\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.408991 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.409008 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.409023 4737 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b65814f9-7380-40c2-8d93-d95858c98d6b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.787177 4737 generic.go:334] "Generic (PLEG): container finished" podID="67eb47db-a20a-4f95-97c2-67df12c02360" containerID="7b502660883b694f4241b82ba282feb68a2305c4c85206e6da01859b49beb6af" exitCode=0 Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.787275 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" event={"ID":"67eb47db-a20a-4f95-97c2-67df12c02360","Type":"ContainerDied","Data":"7b502660883b694f4241b82ba282feb68a2305c4c85206e6da01859b49beb6af"} Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.789152 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f8757c766-6hm2h" 
event={"ID":"b65814f9-7380-40c2-8d93-d95858c98d6b","Type":"ContainerDied","Data":"7c3db5c789880e7ccd4e1aa703650d2eb3fa7c664c97e08be94127b6c19c85e1"} Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.789215 4737 scope.go:117] "RemoveContainer" containerID="2ff97339c8905235db89a01821aafbcb5143c6652bcbb69d630b4c6379074d7c" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.789248 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f8757c766-6hm2h" Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.836043 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 19:00:23 crc kubenswrapper[4737]: I0126 19:00:23.849443 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5f8757c766-6hm2h"] Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.243330 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.328594 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts\") pod \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.328766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpr95\" (UniqueName: \"kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95\") pod \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.328923 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle\") pod 
\"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.328952 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data\") pod \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\" (UID: \"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc\") " Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.335307 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts" (OuterVolumeSpecName: "scripts") pod "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" (UID: "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.340277 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95" (OuterVolumeSpecName: "kube-api-access-qpr95") pod "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" (UID: "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc"). InnerVolumeSpecName "kube-api-access-qpr95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.500868 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.500898 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpr95\" (UniqueName: \"kubernetes.io/projected/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-kube-api-access-qpr95\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.504656 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data" (OuterVolumeSpecName: "config-data") pod "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" (UID: "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.504709 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" (UID: "6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.603501 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.603536 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.805887 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pv5s6" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.805882 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pv5s6" event={"ID":"6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc","Type":"ContainerDied","Data":"201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859"} Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.808096 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201a9f80ffec1043237b34a36eaf1bc7407e948951dbba8ea0ee4dddbd1c3859" Jan 26 19:00:24 crc kubenswrapper[4737]: I0126 19:00:24.982781 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:00:24 crc kubenswrapper[4737]: E0126 19:00:24.983541 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:00:24 crc kubenswrapper[4737]: 
I0126 19:00:24.998216 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" path="/var/lib/kubelet/pods/b65814f9-7380-40c2-8d93-d95858c98d6b/volumes" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.325803 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.422342 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ngkv\" (UniqueName: \"kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv\") pod \"67eb47db-a20a-4f95-97c2-67df12c02360\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.422476 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam\") pod \"67eb47db-a20a-4f95-97c2-67df12c02360\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.422540 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory\") pod \"67eb47db-a20a-4f95-97c2-67df12c02360\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.422825 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle\") pod \"67eb47db-a20a-4f95-97c2-67df12c02360\" (UID: \"67eb47db-a20a-4f95-97c2-67df12c02360\") " Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.427018 4737 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "67eb47db-a20a-4f95-97c2-67df12c02360" (UID: "67eb47db-a20a-4f95-97c2-67df12c02360"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.439119 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv" (OuterVolumeSpecName: "kube-api-access-5ngkv") pod "67eb47db-a20a-4f95-97c2-67df12c02360" (UID: "67eb47db-a20a-4f95-97c2-67df12c02360"). InnerVolumeSpecName "kube-api-access-5ngkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.457857 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory" (OuterVolumeSpecName: "inventory") pod "67eb47db-a20a-4f95-97c2-67df12c02360" (UID: "67eb47db-a20a-4f95-97c2-67df12c02360"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.459999 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67eb47db-a20a-4f95-97c2-67df12c02360" (UID: "67eb47db-a20a-4f95-97c2-67df12c02360"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.526177 4737 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.526219 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ngkv\" (UniqueName: \"kubernetes.io/projected/67eb47db-a20a-4f95-97c2-67df12c02360-kube-api-access-5ngkv\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.526229 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.526239 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67eb47db-a20a-4f95-97c2-67df12c02360-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.823395 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" event={"ID":"67eb47db-a20a-4f95-97c2-67df12c02360","Type":"ContainerDied","Data":"806b14976819cb5608987f7766e742fd459c4ba5b95faf0678242163817474a2"} Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.825728 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806b14976819cb5608987f7766e742fd459c4ba5b95faf0678242163817474a2" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.825349 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.892243 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr"] Jan 26 19:00:25 crc kubenswrapper[4737]: E0126 19:00:25.892911 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" containerName="aodh-db-sync" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.892935 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" containerName="aodh-db-sync" Jan 26 19:00:25 crc kubenswrapper[4737]: E0126 19:00:25.892995 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.893005 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" Jan 26 19:00:25 crc kubenswrapper[4737]: E0126 19:00:25.893036 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67eb47db-a20a-4f95-97c2-67df12c02360" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.893048 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="67eb47db-a20a-4f95-97c2-67df12c02360" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.893376 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65814f9-7380-40c2-8d93-d95858c98d6b" containerName="heat-engine" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.893408 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" containerName="aodh-db-sync" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.893424 4737 
memory_manager.go:354] "RemoveStaleState removing state" podUID="67eb47db-a20a-4f95-97c2-67df12c02360" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.894675 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.897124 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.897447 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.901818 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.904521 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.918470 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr"] Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.932756 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.932890 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:25 crc kubenswrapper[4737]: I0126 19:00:25.932933 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd2cl\" (UniqueName: \"kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.018328 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.037526 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.037629 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2cl\" (UniqueName: \"kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.037932 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.045391 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.057807 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.064298 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2cl\" (UniqueName: \"kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ld6hr\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.219305 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.372635 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.451053 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.611125 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.611422 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-api" containerID="cri-o://e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e" gracePeriod=30 Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.611719 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-notifier" containerID="cri-o://c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6" gracePeriod=30 Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.611785 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-listener" containerID="cri-o://94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59" gracePeriod=30 Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.611824 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-evaluator" containerID="cri-o://4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091" gracePeriod=30 Jan 26 19:00:26 crc kubenswrapper[4737]: I0126 19:00:26.970989 4737 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr"] Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.509025 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.855123 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" event={"ID":"2af8847d-3acf-4733-a507-7d00229ef74c","Type":"ContainerStarted","Data":"5f2f24cd1720b055d2ab4f1e12d0e41b12510fc069dcc424485201c135658377"} Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.855753 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" event={"ID":"2af8847d-3acf-4733-a507-7d00229ef74c","Type":"ContainerStarted","Data":"252b72abb5cf55963067a6c4fa080bc15c8c01da1da46d1610300b46d16fdb74"} Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.859398 4737 generic.go:334] "Generic (PLEG): container finished" podID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerID="4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.859432 4737 generic.go:334] "Generic (PLEG): container finished" podID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerID="e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.859465 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerDied","Data":"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091"} Jan 26 19:00:27 crc kubenswrapper[4737]: I0126 19:00:27.859501 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerDied","Data":"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e"} Jan 26 19:00:28 crc kubenswrapper[4737]: I0126 19:00:28.876306 4737 generic.go:334] "Generic (PLEG): container finished" podID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerID="c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6" exitCode=0 Jan 26 19:00:28 crc kubenswrapper[4737]: I0126 19:00:28.876374 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerDied","Data":"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6"} Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.829684 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.854359 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" podStartSLOduration=4.395652706 podStartE2EDuration="4.854329294s" podCreationTimestamp="2026-01-26 19:00:25 +0000 UTC" firstStartedPulling="2026-01-26 19:00:27.046528975 +0000 UTC m=+1800.354723683" lastFinishedPulling="2026-01-26 19:00:27.505205563 +0000 UTC m=+1800.813400271" observedRunningTime="2026-01-26 19:00:27.880264488 +0000 UTC m=+1801.188459196" watchObservedRunningTime="2026-01-26 19:00:29.854329294 +0000 UTC m=+1803.162524002" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.899006 4737 generic.go:334] "Generic (PLEG): container finished" podID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerID="94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59" exitCode=0 Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.899060 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerDied","Data":"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59"} Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.899110 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b6e782a5-335e-4e15-b264-73a1433e49a8","Type":"ContainerDied","Data":"7f8f5d8acd8e0bb41e7509c1295c2ee88fd36b931d20e940ba1afae653c3e8a8"} Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.899131 4737 scope.go:117] "RemoveContainer" containerID="94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.899316 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951134 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951458 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951523 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951549 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fxdgv\" (UniqueName: \"kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951658 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951728 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs\") pod \"b6e782a5-335e-4e15-b264-73a1433e49a8\" (UID: \"b6e782a5-335e-4e15-b264-73a1433e49a8\") " Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.951335 4737 scope.go:117] "RemoveContainer" containerID="c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.959474 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts" (OuterVolumeSpecName: "scripts") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:29 crc kubenswrapper[4737]: I0126 19:00:29.969845 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv" (OuterVolumeSpecName: "kube-api-access-fxdgv") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "kube-api-access-fxdgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.046693 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.054446 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxdgv\" (UniqueName: \"kubernetes.io/projected/b6e782a5-335e-4e15-b264-73a1433e49a8-kube-api-access-fxdgv\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.054481 4737 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.054493 4737 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.061930 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.106780 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data" (OuterVolumeSpecName: "config-data") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.136866 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6e782a5-335e-4e15-b264-73a1433e49a8" (UID: "b6e782a5-335e-4e15-b264-73a1433e49a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.153514 4737 scope.go:117] "RemoveContainer" containerID="4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.158429 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.158468 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.158481 4737 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6e782a5-335e-4e15-b264-73a1433e49a8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.189480 4737 scope.go:117] 
"RemoveContainer" containerID="e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.248272 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.253976 4737 scope.go:117] "RemoveContainer" containerID="94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.254729 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59\": container with ID starting with 94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59 not found: ID does not exist" containerID="94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.254772 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59"} err="failed to get container status \"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59\": rpc error: code = NotFound desc = could not find container \"94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59\": container with ID starting with 94580c31fcb8b54b50cee8e33ce948e14949b72730dfa3ca9f36ef3f38abcd59 not found: ID does not exist" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.254800 4737 scope.go:117] "RemoveContainer" containerID="c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.255101 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6\": container with ID starting with 
c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6 not found: ID does not exist" containerID="c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.255133 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6"} err="failed to get container status \"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6\": rpc error: code = NotFound desc = could not find container \"c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6\": container with ID starting with c1828109c44925e8788da5cc4feb78f716c57158a1bb287520d16b6f2bf768a6 not found: ID does not exist" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.255152 4737 scope.go:117] "RemoveContainer" containerID="4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.255457 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091\": container with ID starting with 4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091 not found: ID does not exist" containerID="4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.255505 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091"} err="failed to get container status \"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091\": rpc error: code = NotFound desc = could not find container \"4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091\": container with ID starting with 4e6da485e3fa31da590ef6ae3811c6aa55a644fed8a1b6c2a1033c771bd84091 not found: ID does not 
exist" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.255540 4737 scope.go:117] "RemoveContainer" containerID="e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.255871 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e\": container with ID starting with e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e not found: ID does not exist" containerID="e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.255907 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e"} err="failed to get container status \"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e\": rpc error: code = NotFound desc = could not find container \"e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e\": container with ID starting with e6308bf8ea79d7cb29c981c7aae95fb92ae7625fedaf96040c606475c5136c5e not found: ID does not exist" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.264871 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.279595 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.280266 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-listener" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280290 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-listener" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.280307 4737 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-notifier" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280315 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-notifier" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.280343 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-evaluator" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280352 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-evaluator" Jan 26 19:00:30 crc kubenswrapper[4737]: E0126 19:00:30.280369 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-api" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280377 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-api" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280709 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-listener" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280747 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-evaluator" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280775 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-api" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.280791 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" containerName="aodh-notifier" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.283367 4737 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.286924 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.289893 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.290195 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.290424 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5skxc" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.300603 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.303581 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.365829 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-scripts\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.365884 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks6h\" (UniqueName: \"kubernetes.io/projected/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-kube-api-access-xks6h\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.366010 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-config-data\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.366027 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-public-tls-certs\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.366050 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-internal-tls-certs\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.366221 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.467208 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xks6h\" (UniqueName: \"kubernetes.io/projected/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-kube-api-access-xks6h\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.467591 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-config-data\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc 
kubenswrapper[4737]: I0126 19:00:30.467677 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-public-tls-certs\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.467757 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-internal-tls-certs\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.467856 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.467985 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-scripts\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.472499 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-public-tls-certs\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.472617 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-internal-tls-certs\") pod \"aodh-0\" (UID: 
\"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.489878 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xks6h\" (UniqueName: \"kubernetes.io/projected/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-kube-api-access-xks6h\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.489988 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-combined-ca-bundle\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.490272 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-config-data\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.490711 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147666d0-b0ae-46ad-aaa0-2fcf6db0f137-scripts\") pod \"aodh-0\" (UID: \"147666d0-b0ae-46ad-aaa0-2fcf6db0f137\") " pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.606483 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.920207 4737 generic.go:334] "Generic (PLEG): container finished" podID="2af8847d-3acf-4733-a507-7d00229ef74c" containerID="5f2f24cd1720b055d2ab4f1e12d0e41b12510fc069dcc424485201c135658377" exitCode=0 Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.920250 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" event={"ID":"2af8847d-3acf-4733-a507-7d00229ef74c","Type":"ContainerDied","Data":"5f2f24cd1720b055d2ab4f1e12d0e41b12510fc069dcc424485201c135658377"} Jan 26 19:00:30 crc kubenswrapper[4737]: I0126 19:00:30.995161 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e782a5-335e-4e15-b264-73a1433e49a8" path="/var/lib/kubelet/pods/b6e782a5-335e-4e15-b264-73a1433e49a8/volumes" Jan 26 19:00:31 crc kubenswrapper[4737]: W0126 19:00:31.138027 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147666d0_b0ae_46ad_aaa0_2fcf6db0f137.slice/crio-3ea368f89a75c5978141bb3e39818dc182ecc98e0e62c8fed7aabcaa155233b0 WatchSource:0}: Error finding container 3ea368f89a75c5978141bb3e39818dc182ecc98e0e62c8fed7aabcaa155233b0: Status 404 returned error can't find the container with id 3ea368f89a75c5978141bb3e39818dc182ecc98e0e62c8fed7aabcaa155233b0 Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.139555 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.277841 4737 scope.go:117] "RemoveContainer" containerID="f20e4d4c577e8e539ad11813ea69c80f37dee7eabb6f108154211ee6d49e87e9" Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.306219 4737 scope.go:117] "RemoveContainer" containerID="30ea1d45258592e4482ad8d2cf21b32cd34002b74a4604b607e91a5e253c915b" Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.336970 4737 
scope.go:117] "RemoveContainer" containerID="030a0100206de3b5ac22e3f507c2013ce42eebd95236b216e3933c2d5ccf93b1" Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.523954 4737 scope.go:117] "RemoveContainer" containerID="2ebfdcdfdbe14aa0e28829a0a38464bd85c37378a7a7e87d87baabaa0d87c375" Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.604012 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" containerID="cri-o://10ba0aca777890fd9ac6c38b0f53691f9bca7a7ec22d9448fc1c1adc2a454d16" gracePeriod=604795 Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.628351 4737 scope.go:117] "RemoveContainer" containerID="afd662fed630029ff5f2e324a72eedc21f44c56b09e0acccce1a15ca6ba0a38d" Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.936930 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"147666d0-b0ae-46ad-aaa0-2fcf6db0f137","Type":"ContainerStarted","Data":"59bc9f4bf4ffda1032210380979a1323f3e31c993a5435a909cf22b579acef06"} Jan 26 19:00:31 crc kubenswrapper[4737]: I0126 19:00:31.936973 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"147666d0-b0ae-46ad-aaa0-2fcf6db0f137","Type":"ContainerStarted","Data":"3ea368f89a75c5978141bb3e39818dc182ecc98e0e62c8fed7aabcaa155233b0"} Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.198722 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.260047 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam\") pod \"2af8847d-3acf-4733-a507-7d00229ef74c\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.290985 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2af8847d-3acf-4733-a507-7d00229ef74c" (UID: "2af8847d-3acf-4733-a507-7d00229ef74c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.362265 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory\") pod \"2af8847d-3acf-4733-a507-7d00229ef74c\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.362751 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd2cl\" (UniqueName: \"kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl\") pod \"2af8847d-3acf-4733-a507-7d00229ef74c\" (UID: \"2af8847d-3acf-4733-a507-7d00229ef74c\") " Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.363198 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 
19:00:33.366553 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl" (OuterVolumeSpecName: "kube-api-access-qd2cl") pod "2af8847d-3acf-4733-a507-7d00229ef74c" (UID: "2af8847d-3acf-4733-a507-7d00229ef74c"). InnerVolumeSpecName "kube-api-access-qd2cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.393583 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory" (OuterVolumeSpecName: "inventory") pod "2af8847d-3acf-4733-a507-7d00229ef74c" (UID: "2af8847d-3acf-4733-a507-7d00229ef74c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.465971 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af8847d-3acf-4733-a507-7d00229ef74c-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.466018 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd2cl\" (UniqueName: \"kubernetes.io/projected/2af8847d-3acf-4733-a507-7d00229ef74c-kube-api-access-qd2cl\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.971897 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.971891 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ld6hr" event={"ID":"2af8847d-3acf-4733-a507-7d00229ef74c","Type":"ContainerDied","Data":"252b72abb5cf55963067a6c4fa080bc15c8c01da1da46d1610300b46d16fdb74"} Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.971992 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="252b72abb5cf55963067a6c4fa080bc15c8c01da1da46d1610300b46d16fdb74" Jan 26 19:00:33 crc kubenswrapper[4737]: I0126 19:00:33.975463 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"147666d0-b0ae-46ad-aaa0-2fcf6db0f137","Type":"ContainerStarted","Data":"4922d872ebf6d3672736c3eddf8ba191919040b8c0ce19c5b7c81ecf77ce18fe"} Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.303316 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj"] Jan 26 19:00:34 crc kubenswrapper[4737]: E0126 19:00:34.304488 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af8847d-3acf-4733-a507-7d00229ef74c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.304524 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af8847d-3acf-4733-a507-7d00229ef74c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.304982 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af8847d-3acf-4733-a507-7d00229ef74c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.306788 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.311847 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.312008 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.312225 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.312369 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.336711 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj"] Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.389790 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.390277 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: 
I0126 19:00:34.390573 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57m9f\" (UniqueName: \"kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.390957 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.492495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.492554 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.492623 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57m9f\" (UniqueName: 
\"kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.492725 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.500670 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.502091 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.509128 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.515593 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57m9f\" (UniqueName: \"kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:34 crc kubenswrapper[4737]: I0126 19:00:34.701138 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:00:35 crc kubenswrapper[4737]: I0126 19:00:35.035079 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"147666d0-b0ae-46ad-aaa0-2fcf6db0f137","Type":"ContainerStarted","Data":"72fdd5b16040f997143a9403254eb947a39d4b34e28651bec27a9650715f132e"} Jan 26 19:00:35 crc kubenswrapper[4737]: I0126 19:00:35.389292 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj"] Jan 26 19:00:35 crc kubenswrapper[4737]: W0126 19:00:35.515511 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d1d0ed3_31b7_41a2_8f49_741d206509bd.slice/crio-6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d WatchSource:0}: Error finding container 6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d: Status 404 returned error can't find the container with id 6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d Jan 26 19:00:36 crc kubenswrapper[4737]: I0126 19:00:36.052118 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" 
event={"ID":"6d1d0ed3-31b7-41a2-8f49-741d206509bd","Type":"ContainerStarted","Data":"6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d"} Jan 26 19:00:36 crc kubenswrapper[4737]: I0126 19:00:36.061305 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"147666d0-b0ae-46ad-aaa0-2fcf6db0f137","Type":"ContainerStarted","Data":"7d7fa1795a93e3b6a81f7a8af654dce4a59f1dbccdd0485b3f97e1ca816c3005"} Jan 26 19:00:36 crc kubenswrapper[4737]: I0126 19:00:36.513304 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 26 19:00:36 crc kubenswrapper[4737]: I0126 19:00:36.995597 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:00:36 crc kubenswrapper[4737]: E0126 19:00:36.995936 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:00:37 crc kubenswrapper[4737]: I0126 19:00:37.020536 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.594631252 podStartE2EDuration="7.020515938s" podCreationTimestamp="2026-01-26 19:00:30 +0000 UTC" firstStartedPulling="2026-01-26 19:00:31.140466695 +0000 UTC m=+1804.448661403" lastFinishedPulling="2026-01-26 19:00:35.566351381 +0000 UTC m=+1808.874546089" observedRunningTime="2026-01-26 19:00:36.093161047 +0000 UTC m=+1809.401355745" watchObservedRunningTime="2026-01-26 
19:00:37.020515938 +0000 UTC m=+1810.328710646" Jan 26 19:00:37 crc kubenswrapper[4737]: I0126 19:00:37.078004 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" event={"ID":"6d1d0ed3-31b7-41a2-8f49-741d206509bd","Type":"ContainerStarted","Data":"24817856bb13aef40d9b78414fe746d946c5ccb96272a219f738e7d73a75717b"} Jan 26 19:00:37 crc kubenswrapper[4737]: I0126 19:00:37.104013 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" podStartSLOduration=2.545702846 podStartE2EDuration="3.103989759s" podCreationTimestamp="2026-01-26 19:00:34 +0000 UTC" firstStartedPulling="2026-01-26 19:00:35.519654184 +0000 UTC m=+1808.827848892" lastFinishedPulling="2026-01-26 19:00:36.077941097 +0000 UTC m=+1809.386135805" observedRunningTime="2026-01-26 19:00:37.101694643 +0000 UTC m=+1810.409889361" watchObservedRunningTime="2026-01-26 19:00:37.103989759 +0000 UTC m=+1810.412184487" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.104946 4737 generic.go:334] "Generic (PLEG): container finished" podID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerID="10ba0aca777890fd9ac6c38b0f53691f9bca7a7ec22d9448fc1c1adc2a454d16" exitCode=0 Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.105023 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerDied","Data":"10ba0aca777890fd9ac6c38b0f53691f9bca7a7ec22d9448fc1c1adc2a454d16"} Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.344454 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.427522 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.427586 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.427616 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcwlb\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.429165 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.429448 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.430144 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.441286 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb" (OuterVolumeSpecName: "kube-api-access-jcwlb") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "kube-api-access-jcwlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.488684 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2" (OuterVolumeSpecName: "persistence") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.489629 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data" (OuterVolumeSpecName: "config-data") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.531026 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.531281 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.531324 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.532146 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.532732 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.533297 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.533403 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd\") pod \"5bfe0217-6204-407d-aaeb-94051bb8255b\" (UID: \"5bfe0217-6204-407d-aaeb-94051bb8255b\") " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.532769 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.534319 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.535166 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.535195 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.535233 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcwlb\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-kube-api-access-jcwlb\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.535264 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") on node \"crc\" " Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.535279 4737 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.544326 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.544734 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.551931 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info" (OuterVolumeSpecName: "pod-info") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.573547 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.573731 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2") on node "crc" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.627625 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf" (OuterVolumeSpecName: "server-conf") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.637485 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.637520 4737 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bfe0217-6204-407d-aaeb-94051bb8255b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.637532 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.637543 4737 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bfe0217-6204-407d-aaeb-94051bb8255b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.637553 4737 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bfe0217-6204-407d-aaeb-94051bb8255b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.692267 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5bfe0217-6204-407d-aaeb-94051bb8255b" (UID: "5bfe0217-6204-407d-aaeb-94051bb8255b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:38 crc kubenswrapper[4737]: I0126 19:00:38.740112 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bfe0217-6204-407d-aaeb-94051bb8255b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.121333 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"5bfe0217-6204-407d-aaeb-94051bb8255b","Type":"ContainerDied","Data":"1ce82639dbc64e8e36e50a8dca2bc037cfe125204c0dbd49fb60a56482e408a3"} Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.121405 4737 scope.go:117] "RemoveContainer" containerID="10ba0aca777890fd9ac6c38b0f53691f9bca7a7ec22d9448fc1c1adc2a454d16" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.121646 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.154229 4737 scope.go:117] "RemoveContainer" containerID="3014aff826d6940c1d9ef79a0dc47bd5a4dba695d4fb45b94f0378a1b7619f38" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.175779 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.270125 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.291410 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:39 crc kubenswrapper[4737]: E0126 19:00:39.292319 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="setup-container" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.292552 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" 
containerName="setup-container" Jan 26 19:00:39 crc kubenswrapper[4737]: E0126 19:00:39.292579 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.292593 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.292938 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" containerName="rabbitmq" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.294873 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.305720 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.466967 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467038 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467199 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-config-data\") pod 
\"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467463 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72e5eb94-0267-4126-b24c-9b816c66badf-pod-info\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467592 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz9h\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-kube-api-access-ttz9h\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467726 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-server-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467752 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467781 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: 
\"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467806 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467833 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72e5eb94-0267-4126-b24c-9b816c66badf-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.467941 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.570528 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.570645 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " 
pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.571653 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-config-data\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.571773 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72e5eb94-0267-4126-b24c-9b816c66badf-pod-info\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.571838 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz9h\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-kube-api-access-ttz9h\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.571948 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-server-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.571984 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572013 4737 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572040 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572071 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72e5eb94-0267-4126-b24c-9b816c66badf-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572193 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572610 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-config-data\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572901 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.572947 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.573340 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.573438 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72e5eb94-0267-4126-b24c-9b816c66badf-server-conf\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.575492 4737 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.575523 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94be7bc95e95b6be2553bf8bbbf70b563164647bca719a84027c68345843d929/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.578490 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.579290 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.580613 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72e5eb94-0267-4126-b24c-9b816c66badf-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.589777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72e5eb94-0267-4126-b24c-9b816c66badf-pod-info\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " 
pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.595671 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz9h\" (UniqueName: \"kubernetes.io/projected/72e5eb94-0267-4126-b24c-9b816c66badf-kube-api-access-ttz9h\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.654406 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf74a87d-7af1-49ca-ad77-fa33c810eec2\") pod \"rabbitmq-server-1\" (UID: \"72e5eb94-0267-4126-b24c-9b816c66badf\") " pod="openstack/rabbitmq-server-1" Jan 26 19:00:39 crc kubenswrapper[4737]: I0126 19:00:39.924451 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 26 19:00:40 crc kubenswrapper[4737]: I0126 19:00:40.439973 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 26 19:00:40 crc kubenswrapper[4737]: W0126 19:00:40.441490 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e5eb94_0267_4126_b24c_9b816c66badf.slice/crio-fde409a36ee4f892d50657c2da6c4d9aad2bfd8e6853a46dbfa2ab69e7834370 WatchSource:0}: Error finding container fde409a36ee4f892d50657c2da6c4d9aad2bfd8e6853a46dbfa2ab69e7834370: Status 404 returned error can't find the container with id fde409a36ee4f892d50657c2da6c4d9aad2bfd8e6853a46dbfa2ab69e7834370 Jan 26 19:00:40 crc kubenswrapper[4737]: I0126 19:00:40.998102 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bfe0217-6204-407d-aaeb-94051bb8255b" path="/var/lib/kubelet/pods/5bfe0217-6204-407d-aaeb-94051bb8255b/volumes" Jan 26 19:00:41 crc kubenswrapper[4737]: I0126 19:00:41.155551 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"72e5eb94-0267-4126-b24c-9b816c66badf","Type":"ContainerStarted","Data":"fde409a36ee4f892d50657c2da6c4d9aad2bfd8e6853a46dbfa2ab69e7834370"} Jan 26 19:00:43 crc kubenswrapper[4737]: I0126 19:00:43.187026 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"72e5eb94-0267-4126-b24c-9b816c66badf","Type":"ContainerStarted","Data":"69e0efdaf077153de2dee532c438a14d4dd6f34a000e0599c8b2386242c988fe"} Jan 26 19:00:47 crc kubenswrapper[4737]: I0126 19:00:47.981551 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:00:47 crc kubenswrapper[4737]: E0126 19:00:47.982284 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.162190 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490901-rhxwf"] Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.164883 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.174354 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490901-rhxwf"] Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.327055 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.327420 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.327447 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbfq\" (UniqueName: \"kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.327709 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.430324 4737 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.430434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.430457 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbfq\" (UniqueName: \"kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.430502 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.437139 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.437569 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.440715 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.449942 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbfq\" (UniqueName: \"kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq\") pod \"keystone-cron-29490901-rhxwf\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.496421 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:00 crc kubenswrapper[4737]: I0126 19:01:00.980949 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490901-rhxwf"] Jan 26 19:01:01 crc kubenswrapper[4737]: I0126 19:01:01.401625 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490901-rhxwf" event={"ID":"37efbad2-f8c2-4830-9ece-86870bf29923","Type":"ContainerStarted","Data":"034d47700a23eae2be6564f60f6f35f1ea2db7efad7d5621568cbc982840c277"} Jan 26 19:01:01 crc kubenswrapper[4737]: I0126 19:01:01.401950 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490901-rhxwf" event={"ID":"37efbad2-f8c2-4830-9ece-86870bf29923","Type":"ContainerStarted","Data":"00e4b0eda60c25749e1eb18b8dce07968ce20be047a4083ee725f0d67a2c7bd6"} Jan 26 19:01:01 crc kubenswrapper[4737]: I0126 19:01:01.420667 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490901-rhxwf" podStartSLOduration=1.4206451420000001 podStartE2EDuration="1.420645142s" podCreationTimestamp="2026-01-26 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:01.420377196 +0000 UTC m=+1834.728571904" watchObservedRunningTime="2026-01-26 19:01:01.420645142 +0000 UTC m=+1834.728839850" Jan 26 19:01:02 crc kubenswrapper[4737]: I0126 19:01:02.982200 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:01:02 crc kubenswrapper[4737]: E0126 19:01:02.982897 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:01:04 crc kubenswrapper[4737]: I0126 19:01:04.436976 4737 generic.go:334] "Generic (PLEG): container finished" podID="37efbad2-f8c2-4830-9ece-86870bf29923" containerID="034d47700a23eae2be6564f60f6f35f1ea2db7efad7d5621568cbc982840c277" exitCode=0 Jan 26 19:01:04 crc kubenswrapper[4737]: I0126 19:01:04.437028 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490901-rhxwf" event={"ID":"37efbad2-f8c2-4830-9ece-86870bf29923","Type":"ContainerDied","Data":"034d47700a23eae2be6564f60f6f35f1ea2db7efad7d5621568cbc982840c277"} Jan 26 19:01:05 crc kubenswrapper[4737]: I0126 19:01:05.951717 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.097382 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data\") pod \"37efbad2-f8c2-4830-9ece-86870bf29923\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.097581 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpbfq\" (UniqueName: \"kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq\") pod \"37efbad2-f8c2-4830-9ece-86870bf29923\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.097663 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys\") pod \"37efbad2-f8c2-4830-9ece-86870bf29923\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.097694 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle\") pod \"37efbad2-f8c2-4830-9ece-86870bf29923\" (UID: \"37efbad2-f8c2-4830-9ece-86870bf29923\") " Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.121257 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "37efbad2-f8c2-4830-9ece-86870bf29923" (UID: "37efbad2-f8c2-4830-9ece-86870bf29923"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.122296 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq" (OuterVolumeSpecName: "kube-api-access-gpbfq") pod "37efbad2-f8c2-4830-9ece-86870bf29923" (UID: "37efbad2-f8c2-4830-9ece-86870bf29923"). InnerVolumeSpecName "kube-api-access-gpbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.165261 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37efbad2-f8c2-4830-9ece-86870bf29923" (UID: "37efbad2-f8c2-4830-9ece-86870bf29923"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.201715 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpbfq\" (UniqueName: \"kubernetes.io/projected/37efbad2-f8c2-4830-9ece-86870bf29923-kube-api-access-gpbfq\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.201755 4737 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.201770 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.205172 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data" (OuterVolumeSpecName: "config-data") pod "37efbad2-f8c2-4830-9ece-86870bf29923" (UID: "37efbad2-f8c2-4830-9ece-86870bf29923"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.303846 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37efbad2-f8c2-4830-9ece-86870bf29923-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.471792 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490901-rhxwf" event={"ID":"37efbad2-f8c2-4830-9ece-86870bf29923","Type":"ContainerDied","Data":"00e4b0eda60c25749e1eb18b8dce07968ce20be047a4083ee725f0d67a2c7bd6"} Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.471846 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e4b0eda60c25749e1eb18b8dce07968ce20be047a4083ee725f0d67a2c7bd6" Jan 26 19:01:06 crc kubenswrapper[4737]: I0126 19:01:06.471913 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490901-rhxwf" Jan 26 19:01:15 crc kubenswrapper[4737]: I0126 19:01:15.588277 4737 generic.go:334] "Generic (PLEG): container finished" podID="72e5eb94-0267-4126-b24c-9b816c66badf" containerID="69e0efdaf077153de2dee532c438a14d4dd6f34a000e0599c8b2386242c988fe" exitCode=0 Jan 26 19:01:15 crc kubenswrapper[4737]: I0126 19:01:15.588377 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"72e5eb94-0267-4126-b24c-9b816c66badf","Type":"ContainerDied","Data":"69e0efdaf077153de2dee532c438a14d4dd6f34a000e0599c8b2386242c988fe"} Jan 26 19:01:16 crc kubenswrapper[4737]: I0126 19:01:16.600531 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"72e5eb94-0267-4126-b24c-9b816c66badf","Type":"ContainerStarted","Data":"4edfe20dbcccb7187eb3e10e502c22eed1675d016cbe7fb225e14d36ef9dc717"} Jan 26 19:01:16 crc kubenswrapper[4737]: I0126 19:01:16.601888 4737 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 26 19:01:16 crc kubenswrapper[4737]: I0126 19:01:16.626495 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.62647279 podStartE2EDuration="37.62647279s" podCreationTimestamp="2026-01-26 19:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:16.622160205 +0000 UTC m=+1849.930354903" watchObservedRunningTime="2026-01-26 19:01:16.62647279 +0000 UTC m=+1849.934667508" Jan 26 19:01:17 crc kubenswrapper[4737]: I0126 19:01:17.982020 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:01:17 crc kubenswrapper[4737]: E0126 19:01:17.982587 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:01:29 crc kubenswrapper[4737]: I0126 19:01:29.929467 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 26 19:01:30 crc kubenswrapper[4737]: I0126 19:01:30.006724 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:31 crc kubenswrapper[4737]: I0126 19:01:31.959591 4737 scope.go:117] "RemoveContainer" containerID="6e28763de49ab84419a183827eeaa2498baa40575e3f5b2ab71c1383ba21e7bf" Jan 26 19:01:31 crc kubenswrapper[4737]: I0126 19:01:31.982187 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:01:31 crc 
kubenswrapper[4737]: E0126 19:01:31.982649 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:01:32 crc kubenswrapper[4737]: I0126 19:01:32.029640 4737 scope.go:117] "RemoveContainer" containerID="a5a1a24c6d16166051da6f258f5ce4c4c2ed6a4c723f322d6a20383febb61693" Jan 26 19:01:32 crc kubenswrapper[4737]: I0126 19:01:32.077813 4737 scope.go:117] "RemoveContainer" containerID="fbdf9cd4e5898363e13e592218834c4a83818b60685c65abeca87b0bc8064703" Jan 26 19:01:32 crc kubenswrapper[4737]: I0126 19:01:32.134360 4737 scope.go:117] "RemoveContainer" containerID="0be6c934d819d7882080f2d5bcefc3f6ede201b6a0c105d7d0b2ec4ca03547ab" Jan 26 19:01:34 crc kubenswrapper[4737]: I0126 19:01:34.332329 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" containerID="cri-o://022be3a0298b767246af123798dbc6e92b83adbf032bcac0595eebfe08f81137" gracePeriod=604796 Jan 26 19:01:36 crc kubenswrapper[4737]: I0126 19:01:36.459857 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 26 19:01:40 crc kubenswrapper[4737]: I0126 19:01:40.914534 4737 generic.go:334] "Generic (PLEG): container finished" podID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerID="022be3a0298b767246af123798dbc6e92b83adbf032bcac0595eebfe08f81137" exitCode=0 Jan 26 19:01:40 crc kubenswrapper[4737]: I0126 19:01:40.914620 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerDied","Data":"022be3a0298b767246af123798dbc6e92b83adbf032bcac0595eebfe08f81137"} Jan 26 19:01:40 crc kubenswrapper[4737]: I0126 19:01:40.915001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f","Type":"ContainerDied","Data":"1ee9dd549f27c874bf9d6d6ea6424c9bd6686b9ddc095a7c415dd84a7ad6f6b4"} Jan 26 19:01:40 crc kubenswrapper[4737]: I0126 19:01:40.915012 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee9dd549f27c874bf9d6d6ea6424c9bd6686b9ddc095a7c415dd84a7ad6f6b4" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.029552 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.120272 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.120647 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.120744 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: 
\"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.120846 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.120922 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.121033 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.121157 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.121255 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.121360 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zktn\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.121980 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.122366 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.122529 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.122583 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.122667 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf\") pod \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\" (UID: \"49c4dfd6-d334-4e11-8a1d-0dd773f91b1f\") " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.123661 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.123745 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.123802 4737 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.131284 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.131378 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.131874 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info" (OuterVolumeSpecName: "pod-info") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.144182 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn" (OuterVolumeSpecName: "kube-api-access-8zktn") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "kube-api-access-8zktn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.166311 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4" (OuterVolumeSpecName: "persistence") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.217991 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf" (OuterVolumeSpecName: "server-conf") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230332 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zktn\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-kube-api-access-8zktn\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230399 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") on node \"crc\" " Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230417 4737 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230427 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230437 4737 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.230449 4737 reconciler_common.go:293] "Volume detached 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.242455 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data" (OuterVolumeSpecName: "config-data") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.277676 4737 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.277965 4737 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4") on node "crc" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.332826 4737 reconciler_common.go:293] "Volume detached for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.332872 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.352942 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" (UID: "49c4dfd6-d334-4e11-8a1d-0dd773f91b1f"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.435184 4737 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.925966 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.964540 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.979128 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:41 crc kubenswrapper[4737]: I0126 19:01:41.995979 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:42 crc kubenswrapper[4737]: E0126 19:01:42.004104 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.004137 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" Jan 26 19:01:42 crc kubenswrapper[4737]: E0126 19:01:42.004152 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="setup-container" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.004160 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="setup-container" Jan 26 19:01:42 crc kubenswrapper[4737]: E0126 19:01:42.004209 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37efbad2-f8c2-4830-9ece-86870bf29923" containerName="keystone-cron" Jan 26 19:01:42 crc 
kubenswrapper[4737]: I0126 19:01:42.004218 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="37efbad2-f8c2-4830-9ece-86870bf29923" containerName="keystone-cron" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.004428 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="37efbad2-f8c2-4830-9ece-86870bf29923" containerName="keystone-cron" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.004447 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" containerName="rabbitmq" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.005790 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.041872 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152367 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152405 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152472 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod 
\"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152511 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152530 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152571 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-824kg\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-kube-api-access-824kg\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152610 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcd52a93-f277-416b-b37b-2ae58d2edaa5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152648 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152680 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152704 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.152723 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcd52a93-f277-416b-b37b-2ae58d2edaa5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.254631 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcd52a93-f277-416b-b37b-2ae58d2edaa5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255378 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc 
kubenswrapper[4737]: I0126 19:01:42.255427 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255459 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255477 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcd52a93-f277-416b-b37b-2ae58d2edaa5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255551 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255571 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255622 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255676 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255706 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.255745 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-824kg\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-kube-api-access-824kg\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.256399 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.256936 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.257332 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.259469 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.260234 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcd52a93-f277-416b-b37b-2ae58d2edaa5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.263805 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.263968 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcd52a93-f277-416b-b37b-2ae58d2edaa5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.266782 4737 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.266833 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e9878083e57acdd195c36221ffb7f100349a5e63230bc6c4e3af1f5b75c0abd7/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.273531 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcd52a93-f277-416b-b37b-2ae58d2edaa5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.276907 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.279515 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-824kg\" (UniqueName: \"kubernetes.io/projected/bcd52a93-f277-416b-b37b-2ae58d2edaa5-kube-api-access-824kg\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.359838 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5aa7f00a-70ef-4395-a7e7-fa25917f1da4\") pod \"rabbitmq-server-0\" (UID: \"bcd52a93-f277-416b-b37b-2ae58d2edaa5\") " pod="openstack/rabbitmq-server-0" Jan 26 19:01:42 crc kubenswrapper[4737]: I0126 19:01:42.627966 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:01:43 crc kubenswrapper[4737]: I0126 19:01:43.002707 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c4dfd6-d334-4e11-8a1d-0dd773f91b1f" path="/var/lib/kubelet/pods/49c4dfd6-d334-4e11-8a1d-0dd773f91b1f/volumes" Jan 26 19:01:43 crc kubenswrapper[4737]: I0126 19:01:43.171518 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:01:43 crc kubenswrapper[4737]: I0126 19:01:43.949876 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcd52a93-f277-416b-b37b-2ae58d2edaa5","Type":"ContainerStarted","Data":"061eebe5d9d0fa6abe7900fa0a1bf6bf7480b402234c6cfc1cc661e1bc5e5691"} Jan 26 19:01:44 crc kubenswrapper[4737]: I0126 19:01:44.982438 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:01:44 crc kubenswrapper[4737]: E0126 19:01:44.983137 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:01:45 crc kubenswrapper[4737]: I0126 19:01:45.987319 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"bcd52a93-f277-416b-b37b-2ae58d2edaa5","Type":"ContainerStarted","Data":"7d0b4832263cb845a98af8c235aa82e745de7b92dbabe8fd205c8fd5c174e2d5"} Jan 26 19:01:56 crc kubenswrapper[4737]: I0126 19:01:56.994346 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:01:56 crc kubenswrapper[4737]: E0126 19:01:56.995289 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.018257 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.023891 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.040439 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.075656 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.075777 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.076051 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bv2d\" (UniqueName: \"kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.179487 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bv2d\" (UniqueName: \"kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.180469 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.180489 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.180915 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.182130 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.202040 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bv2d\" (UniqueName: \"kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d\") pod \"redhat-marketplace-hwld9\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.359480 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.826812 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:10 crc kubenswrapper[4737]: I0126 19:02:10.982658 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:02:10 crc kubenswrapper[4737]: E0126 19:02:10.983092 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:02:11 crc kubenswrapper[4737]: I0126 19:02:11.330805 4737 generic.go:334] "Generic (PLEG): container finished" podID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerID="b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171" exitCode=0 Jan 26 19:02:11 crc kubenswrapper[4737]: I0126 19:02:11.330914 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerDied","Data":"b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171"} Jan 26 19:02:11 crc kubenswrapper[4737]: I0126 19:02:11.331184 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerStarted","Data":"5e5321926be7caa69758aebd02ed8dcfbf00f75069c368a8b79c09d35013318e"} Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.427777 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 
19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.434166 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.442102 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.566578 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.566892 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.566942 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlkrn\" (UniqueName: \"kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.668987 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 
19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.669090 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.669114 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlkrn\" (UniqueName: \"kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.669618 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.669867 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc kubenswrapper[4737]: I0126 19:02:12.705124 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlkrn\" (UniqueName: \"kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn\") pod \"community-operators-gqmrn\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:12 crc 
kubenswrapper[4737]: I0126 19:02:12.757991 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:13 crc kubenswrapper[4737]: I0126 19:02:13.362429 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 19:02:13 crc kubenswrapper[4737]: I0126 19:02:13.395431 4737 generic.go:334] "Generic (PLEG): container finished" podID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerID="9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30" exitCode=0 Jan 26 19:02:13 crc kubenswrapper[4737]: I0126 19:02:13.395532 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerDied","Data":"9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30"} Jan 26 19:02:13 crc kubenswrapper[4737]: I0126 19:02:13.405218 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerStarted","Data":"45c421eafd53949dfdefdf5306774c0efebba854ef1f35720f63a62d26ba65b4"} Jan 26 19:02:14 crc kubenswrapper[4737]: I0126 19:02:14.418937 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerStarted","Data":"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26"} Jan 26 19:02:14 crc kubenswrapper[4737]: I0126 19:02:14.421221 4737 generic.go:334] "Generic (PLEG): container finished" podID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerID="34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989" exitCode=0 Jan 26 19:02:14 crc kubenswrapper[4737]: I0126 19:02:14.421273 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" 
event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerDied","Data":"34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989"} Jan 26 19:02:14 crc kubenswrapper[4737]: I0126 19:02:14.446945 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwld9" podStartSLOduration=2.9492219029999998 podStartE2EDuration="5.446923409s" podCreationTimestamp="2026-01-26 19:02:09 +0000 UTC" firstStartedPulling="2026-01-26 19:02:11.333134994 +0000 UTC m=+1904.641329702" lastFinishedPulling="2026-01-26 19:02:13.8308365 +0000 UTC m=+1907.139031208" observedRunningTime="2026-01-26 19:02:14.438909574 +0000 UTC m=+1907.747104272" watchObservedRunningTime="2026-01-26 19:02:14.446923409 +0000 UTC m=+1907.755118117" Jan 26 19:02:16 crc kubenswrapper[4737]: I0126 19:02:16.448620 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerStarted","Data":"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5"} Jan 26 19:02:17 crc kubenswrapper[4737]: I0126 19:02:17.462191 4737 generic.go:334] "Generic (PLEG): container finished" podID="bcd52a93-f277-416b-b37b-2ae58d2edaa5" containerID="7d0b4832263cb845a98af8c235aa82e745de7b92dbabe8fd205c8fd5c174e2d5" exitCode=0 Jan 26 19:02:17 crc kubenswrapper[4737]: I0126 19:02:17.462296 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcd52a93-f277-416b-b37b-2ae58d2edaa5","Type":"ContainerDied","Data":"7d0b4832263cb845a98af8c235aa82e745de7b92dbabe8fd205c8fd5c174e2d5"} Jan 26 19:02:17 crc kubenswrapper[4737]: I0126 19:02:17.468340 4737 generic.go:334] "Generic (PLEG): container finished" podID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerID="7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5" exitCode=0 Jan 26 19:02:17 crc kubenswrapper[4737]: I0126 19:02:17.468391 4737 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerDied","Data":"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5"} Jan 26 19:02:18 crc kubenswrapper[4737]: I0126 19:02:18.492844 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcd52a93-f277-416b-b37b-2ae58d2edaa5","Type":"ContainerStarted","Data":"cd31778660d9c487a9c0c1f7da5dbc80a51b47c0ebfb47f614838f6d4a2c22aa"} Jan 26 19:02:18 crc kubenswrapper[4737]: I0126 19:02:18.493686 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 19:02:18 crc kubenswrapper[4737]: I0126 19:02:18.497439 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerStarted","Data":"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935"} Jan 26 19:02:18 crc kubenswrapper[4737]: I0126 19:02:18.546524 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.546503106 podStartE2EDuration="37.546503106s" podCreationTimestamp="2026-01-26 19:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:18.530724302 +0000 UTC m=+1911.838919010" watchObservedRunningTime="2026-01-26 19:02:18.546503106 +0000 UTC m=+1911.854697814" Jan 26 19:02:18 crc kubenswrapper[4737]: I0126 19:02:18.578481 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gqmrn" podStartSLOduration=3.156473341 podStartE2EDuration="6.578464103s" podCreationTimestamp="2026-01-26 19:02:12 +0000 UTC" firstStartedPulling="2026-01-26 19:02:14.422816352 +0000 UTC m=+1907.731011060" 
lastFinishedPulling="2026-01-26 19:02:17.844807114 +0000 UTC m=+1911.153001822" observedRunningTime="2026-01-26 19:02:18.572811336 +0000 UTC m=+1911.881006044" watchObservedRunningTime="2026-01-26 19:02:18.578464103 +0000 UTC m=+1911.886658811" Jan 26 19:02:20 crc kubenswrapper[4737]: I0126 19:02:20.360442 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:20 crc kubenswrapper[4737]: I0126 19:02:20.361421 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:20 crc kubenswrapper[4737]: I0126 19:02:20.413845 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:20 crc kubenswrapper[4737]: I0126 19:02:20.577434 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:21 crc kubenswrapper[4737]: I0126 19:02:21.607454 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:21 crc kubenswrapper[4737]: I0126 19:02:21.982165 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:02:21 crc kubenswrapper[4737]: E0126 19:02:21.982496 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:02:22 crc kubenswrapper[4737]: I0126 19:02:22.548971 4737 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-hwld9" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="registry-server" containerID="cri-o://f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26" gracePeriod=2 Jan 26 19:02:22 crc kubenswrapper[4737]: I0126 19:02:22.767367 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:22 crc kubenswrapper[4737]: I0126 19:02:22.767426 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:22 crc kubenswrapper[4737]: I0126 19:02:22.823255 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.168423 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.341844 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bv2d\" (UniqueName: \"kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d\") pod \"32cf47c0-b118-4b69-a582-3f62868b5dd0\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.341952 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content\") pod \"32cf47c0-b118-4b69-a582-3f62868b5dd0\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.342249 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities\") pod 
\"32cf47c0-b118-4b69-a582-3f62868b5dd0\" (UID: \"32cf47c0-b118-4b69-a582-3f62868b5dd0\") " Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.342640 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities" (OuterVolumeSpecName: "utilities") pod "32cf47c0-b118-4b69-a582-3f62868b5dd0" (UID: "32cf47c0-b118-4b69-a582-3f62868b5dd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.342845 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.362107 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32cf47c0-b118-4b69-a582-3f62868b5dd0" (UID: "32cf47c0-b118-4b69-a582-3f62868b5dd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.362388 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d" (OuterVolumeSpecName: "kube-api-access-7bv2d") pod "32cf47c0-b118-4b69-a582-3f62868b5dd0" (UID: "32cf47c0-b118-4b69-a582-3f62868b5dd0"). InnerVolumeSpecName "kube-api-access-7bv2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.445757 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bv2d\" (UniqueName: \"kubernetes.io/projected/32cf47c0-b118-4b69-a582-3f62868b5dd0-kube-api-access-7bv2d\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.445793 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32cf47c0-b118-4b69-a582-3f62868b5dd0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.563443 4737 generic.go:334] "Generic (PLEG): container finished" podID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerID="f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26" exitCode=0 Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.563499 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerDied","Data":"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26"} Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.563534 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwld9" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.563557 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwld9" event={"ID":"32cf47c0-b118-4b69-a582-3f62868b5dd0","Type":"ContainerDied","Data":"5e5321926be7caa69758aebd02ed8dcfbf00f75069c368a8b79c09d35013318e"} Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.563584 4737 scope.go:117] "RemoveContainer" containerID="f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.605368 4737 scope.go:117] "RemoveContainer" containerID="9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.609238 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.634619 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwld9"] Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.640569 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.660223 4737 scope.go:117] "RemoveContainer" containerID="b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.699116 4737 scope.go:117] "RemoveContainer" containerID="f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26" Jan 26 19:02:23 crc kubenswrapper[4737]: E0126 19:02:23.699653 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26\": container with ID starting with 
f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26 not found: ID does not exist" containerID="f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.699698 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26"} err="failed to get container status \"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26\": rpc error: code = NotFound desc = could not find container \"f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26\": container with ID starting with f13be65bb53de5f86e68408c793dd81f7b0fef56631ea033820bbcc8bbcaca26 not found: ID does not exist" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.699729 4737 scope.go:117] "RemoveContainer" containerID="9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30" Jan 26 19:02:23 crc kubenswrapper[4737]: E0126 19:02:23.700486 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30\": container with ID starting with 9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30 not found: ID does not exist" containerID="9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.700528 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30"} err="failed to get container status \"9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30\": rpc error: code = NotFound desc = could not find container \"9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30\": container with ID starting with 9b5687f219e2b54df9065e82307263a6c51c175384e3e62c7b4f9610fd933c30 not found: ID does not 
exist" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.700558 4737 scope.go:117] "RemoveContainer" containerID="b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171" Jan 26 19:02:23 crc kubenswrapper[4737]: E0126 19:02:23.701028 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171\": container with ID starting with b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171 not found: ID does not exist" containerID="b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171" Jan 26 19:02:23 crc kubenswrapper[4737]: I0126 19:02:23.701095 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171"} err="failed to get container status \"b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171\": rpc error: code = NotFound desc = could not find container \"b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171\": container with ID starting with b3724dac3262c7e9d3170be1a3c05cd2505c1bd37b7b2a6f9958a51dd43c6171 not found: ID does not exist" Jan 26 19:02:24 crc kubenswrapper[4737]: I0126 19:02:24.996180 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" path="/var/lib/kubelet/pods/32cf47c0-b118-4b69-a582-3f62868b5dd0/volumes" Jan 26 19:02:25 crc kubenswrapper[4737]: I0126 19:02:25.207694 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 19:02:25 crc kubenswrapper[4737]: I0126 19:02:25.587291 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gqmrn" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="registry-server" 
containerID="cri-o://1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935" gracePeriod=2 Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.247267 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.414298 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content\") pod \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.414382 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities\") pod \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.414553 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlkrn\" (UniqueName: \"kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn\") pod \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\" (UID: \"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d\") " Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.415662 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities" (OuterVolumeSpecName: "utilities") pod "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" (UID: "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.424521 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn" (OuterVolumeSpecName: "kube-api-access-rlkrn") pod "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" (UID: "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d"). InnerVolumeSpecName "kube-api-access-rlkrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.473526 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" (UID: "6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.518122 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.518170 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.518184 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlkrn\" (UniqueName: \"kubernetes.io/projected/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d-kube-api-access-rlkrn\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.614625 4737 generic.go:334] "Generic (PLEG): container finished" podID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" 
containerID="1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935" exitCode=0 Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.614693 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerDied","Data":"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935"} Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.614735 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqmrn" event={"ID":"6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d","Type":"ContainerDied","Data":"45c421eafd53949dfdefdf5306774c0efebba854ef1f35720f63a62d26ba65b4"} Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.614815 4737 scope.go:117] "RemoveContainer" containerID="1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.615642 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqmrn" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.652359 4737 scope.go:117] "RemoveContainer" containerID="7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.685113 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.692649 4737 scope.go:117] "RemoveContainer" containerID="34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.710982 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gqmrn"] Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.754712 4737 scope.go:117] "RemoveContainer" containerID="1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935" Jan 26 19:02:26 crc kubenswrapper[4737]: E0126 19:02:26.755666 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935\": container with ID starting with 1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935 not found: ID does not exist" containerID="1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.755719 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935"} err="failed to get container status \"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935\": rpc error: code = NotFound desc = could not find container \"1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935\": container with ID starting with 1901a1544ae36ebf98f0558b93edc09a743e8de7c879c856b3875e08b4db5935 not 
found: ID does not exist" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.755743 4737 scope.go:117] "RemoveContainer" containerID="7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5" Jan 26 19:02:26 crc kubenswrapper[4737]: E0126 19:02:26.756243 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5\": container with ID starting with 7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5 not found: ID does not exist" containerID="7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.756264 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5"} err="failed to get container status \"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5\": rpc error: code = NotFound desc = could not find container \"7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5\": container with ID starting with 7a7f48d9ddcb921b49249c8b2f928f991c12911e5c1aa4fdd52b2a3f815552a5 not found: ID does not exist" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.756278 4737 scope.go:117] "RemoveContainer" containerID="34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989" Jan 26 19:02:26 crc kubenswrapper[4737]: E0126 19:02:26.756805 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989\": container with ID starting with 34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989 not found: ID does not exist" containerID="34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.756858 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989"} err="failed to get container status \"34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989\": rpc error: code = NotFound desc = could not find container \"34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989\": container with ID starting with 34083f26c75505637999ca660d015eeb050ea2e785d866315a1fcfe3a6422989 not found: ID does not exist" Jan 26 19:02:26 crc kubenswrapper[4737]: I0126 19:02:26.996776 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" path="/var/lib/kubelet/pods/6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d/volumes" Jan 26 19:02:32 crc kubenswrapper[4737]: I0126 19:02:32.285859 4737 scope.go:117] "RemoveContainer" containerID="c04a9af212861452c83b676661f97393cc144f3603cfef17b7005dfd75266a8c" Jan 26 19:02:32 crc kubenswrapper[4737]: I0126 19:02:32.327382 4737 scope.go:117] "RemoveContainer" containerID="022be3a0298b767246af123798dbc6e92b83adbf032bcac0595eebfe08f81137" Jan 26 19:02:32 crc kubenswrapper[4737]: I0126 19:02:32.631386 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 19:02:32 crc kubenswrapper[4737]: I0126 19:02:32.983888 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:02:32 crc kubenswrapper[4737]: E0126 19:02:32.984194 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:02:46 crc 
kubenswrapper[4737]: I0126 19:02:46.991888 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:02:46 crc kubenswrapper[4737]: E0126 19:02:46.994429 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:02:58 crc kubenswrapper[4737]: I0126 19:02:58.982703 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:02:58 crc kubenswrapper[4737]: E0126 19:02:58.983552 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:03:11 crc kubenswrapper[4737]: I0126 19:03:11.982618 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:03:11 crc kubenswrapper[4737]: E0126 19:03:11.983622 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 
26 19:03:23 crc kubenswrapper[4737]: I0126 19:03:23.982247 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:03:23 crc kubenswrapper[4737]: E0126 19:03:23.984120 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:03:32 crc kubenswrapper[4737]: I0126 19:03:32.484413 4737 scope.go:117] "RemoveContainer" containerID="dfb5316391219fe2677f4dcb7434498b89581f308133624bef3d1ac1f0256895" Jan 26 19:03:32 crc kubenswrapper[4737]: I0126 19:03:32.516735 4737 scope.go:117] "RemoveContainer" containerID="e3f7fb73a02346192c2dd0c383b762e95c953006fc16f3cfa1990d8b470a7a91" Jan 26 19:03:32 crc kubenswrapper[4737]: I0126 19:03:32.542211 4737 scope.go:117] "RemoveContainer" containerID="b1fd9439b71dd0bb450527cc64945fda13d085cd73000d7b986a8b9ed49db546" Jan 26 19:03:32 crc kubenswrapper[4737]: I0126 19:03:32.571994 4737 scope.go:117] "RemoveContainer" containerID="eba2baaadf0d55b7aefb35243d0d6460754423de600cf76fe5c0a77e3e91077c" Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.062879 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ntqg8"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.076039 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f9d4-account-create-update-bf25x"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.088926 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xx2nh"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.102183 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-pb2pj"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.114744 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-887b-account-create-update-zvz84"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.152143 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a4fd-account-create-update-jq2tl"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.171488 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f9d4-account-create-update-bf25x"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.185707 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-887b-account-create-update-zvz84"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.202029 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xx2nh"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.215153 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pb2pj"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.228048 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ntqg8"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.242521 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a4fd-account-create-update-jq2tl"] Jan 26 19:03:34 crc kubenswrapper[4737]: I0126 19:03:34.999822 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e51d252-fc4e-4694-87e5-dade4de60ec5" path="/var/lib/kubelet/pods/5e51d252-fc4e-4694-87e5-dade4de60ec5/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.002376 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9404174-9225-41ad-9db6-d523f17739d0" path="/var/lib/kubelet/pods/a9404174-9225-41ad-9db6-d523f17739d0/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.004042 4737 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b117bcd7-b58c-4af6-9bd6-ce70ec70f601" path="/var/lib/kubelet/pods/b117bcd7-b58c-4af6-9bd6-ce70ec70f601/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.005448 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c00937-8b37-4a41-8403-c69b2e307675" path="/var/lib/kubelet/pods/c9c00937-8b37-4a41-8403-c69b2e307675/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.008692 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57f4e7a-0e31-4911-9f19-a43e3d91e721" path="/var/lib/kubelet/pods/e57f4e7a-0e31-4911-9f19-a43e3d91e721/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.011525 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e633aa68-a0b7-4ee4-bf00-7d46105654e2" path="/var/lib/kubelet/pods/e633aa68-a0b7-4ee4-bf00-7d46105654e2/volumes" Jan 26 19:03:35 crc kubenswrapper[4737]: I0126 19:03:35.982963 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:03:36 crc kubenswrapper[4737]: I0126 19:03:36.574949 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3"} Jan 26 19:03:37 crc kubenswrapper[4737]: I0126 19:03:37.048848 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-bc81-account-create-update-fjw9f"] Jan 26 19:03:37 crc kubenswrapper[4737]: I0126 19:03:37.064252 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nppft"] Jan 26 19:03:37 crc kubenswrapper[4737]: I0126 19:03:37.076959 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-nppft"] Jan 26 19:03:37 crc 
kubenswrapper[4737]: I0126 19:03:37.092699 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-bc81-account-create-update-fjw9f"] Jan 26 19:03:39 crc kubenswrapper[4737]: I0126 19:03:38.999045 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3228aed7-c127-465a-ba59-822d4e6e92e6" path="/var/lib/kubelet/pods/3228aed7-c127-465a-ba59-822d4e6e92e6/volumes" Jan 26 19:03:39 crc kubenswrapper[4737]: I0126 19:03:39.001350 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907a6367-2724-43bf-aabf-b9488debfed4" path="/var/lib/kubelet/pods/907a6367-2724-43bf-aabf-b9488debfed4/volumes" Jan 26 19:03:46 crc kubenswrapper[4737]: I0126 19:03:46.053502 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"] Jan 26 19:03:46 crc kubenswrapper[4737]: I0126 19:03:46.070310 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8zd8m"] Jan 26 19:03:46 crc kubenswrapper[4737]: I0126 19:03:46.082490 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-f388-account-create-update-2kjdh"] Jan 26 19:03:46 crc kubenswrapper[4737]: I0126 19:03:46.093441 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-f388-account-create-update-2kjdh"] Jan 26 19:03:47 crc kubenswrapper[4737]: I0126 19:03:46.999676 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a88f548-5326-4e23-bda1-cf97ba384393" path="/var/lib/kubelet/pods/7a88f548-5326-4e23-bda1-cf97ba384393/volumes" Jan 26 19:03:47 crc kubenswrapper[4737]: I0126 19:03:47.001496 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bccfe1e-6106-4184-8cff-37e44dfaef61" path="/var/lib/kubelet/pods/8bccfe1e-6106-4184-8cff-37e44dfaef61/volumes" Jan 26 19:03:58 crc kubenswrapper[4737]: I0126 19:03:58.051137 4737 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/root-account-create-update-x5zfk"] Jan 26 19:03:58 crc kubenswrapper[4737]: I0126 19:03:58.064871 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x5zfk"] Jan 26 19:03:58 crc kubenswrapper[4737]: I0126 19:03:58.994927 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4010ae2-e90f-44a2-99a0-28dd9db76d50" path="/var/lib/kubelet/pods/b4010ae2-e90f-44a2-99a0-28dd9db76d50/volumes" Jan 26 19:04:00 crc kubenswrapper[4737]: I0126 19:04:00.914510 4737 generic.go:334] "Generic (PLEG): container finished" podID="6d1d0ed3-31b7-41a2-8f49-741d206509bd" containerID="24817856bb13aef40d9b78414fe746d946c5ccb96272a219f738e7d73a75717b" exitCode=0 Jan 26 19:04:00 crc kubenswrapper[4737]: I0126 19:04:00.914619 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" event={"ID":"6d1d0ed3-31b7-41a2-8f49-741d206509bd","Type":"ContainerDied","Data":"24817856bb13aef40d9b78414fe746d946c5ccb96272a219f738e7d73a75717b"} Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.502898 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.571605 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle\") pod \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.572327 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam\") pod \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.572560 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57m9f\" (UniqueName: \"kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f\") pod \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.572668 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory\") pod \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\" (UID: \"6d1d0ed3-31b7-41a2-8f49-741d206509bd\") " Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.584992 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f" (OuterVolumeSpecName: "kube-api-access-57m9f") pod "6d1d0ed3-31b7-41a2-8f49-741d206509bd" (UID: "6d1d0ed3-31b7-41a2-8f49-741d206509bd"). InnerVolumeSpecName "kube-api-access-57m9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.585727 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6d1d0ed3-31b7-41a2-8f49-741d206509bd" (UID: "6d1d0ed3-31b7-41a2-8f49-741d206509bd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.615412 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d1d0ed3-31b7-41a2-8f49-741d206509bd" (UID: "6d1d0ed3-31b7-41a2-8f49-741d206509bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.621521 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory" (OuterVolumeSpecName: "inventory") pod "6d1d0ed3-31b7-41a2-8f49-741d206509bd" (UID: "6d1d0ed3-31b7-41a2-8f49-741d206509bd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.675982 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.676023 4737 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.676094 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d1d0ed3-31b7-41a2-8f49-741d206509bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.676109 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57m9f\" (UniqueName: \"kubernetes.io/projected/6d1d0ed3-31b7-41a2-8f49-741d206509bd-kube-api-access-57m9f\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.949214 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" event={"ID":"6d1d0ed3-31b7-41a2-8f49-741d206509bd","Type":"ContainerDied","Data":"6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d"} Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.949279 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7b912eafa501d6186eedb051d97184701200e538d9847c5a06d840d409542d" Jan 26 19:04:02 crc kubenswrapper[4737]: I0126 19:04:02.949506 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.122887 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j"] Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123520 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="extract-utilities" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123549 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="extract-utilities" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123563 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1d0ed3-31b7-41a2-8f49-741d206509bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123571 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1d0ed3-31b7-41a2-8f49-741d206509bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123579 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="extract-content" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123585 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="extract-content" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123596 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123603 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123623 
4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="extract-utilities" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123629 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="extract-utilities" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123640 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123647 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: E0126 19:04:03.123665 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="extract-content" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.123672 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="extract-content" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.124010 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cf47c0-b118-4b69-a582-3f62868b5dd0" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.124039 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1d0ed3-31b7-41a2-8f49-741d206509bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.124063 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3b0d3f-a8bc-46d5-94f2-1aaaf686f89d" containerName="registry-server" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.125007 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.127630 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.127872 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.128006 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.129468 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.138008 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j"] Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.318906 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.319136 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zgp\" (UniqueName: \"kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc 
kubenswrapper[4737]: I0126 19:04:03.319184 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.420463 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zgp\" (UniqueName: \"kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.420531 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.420718 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.426172 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.429579 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.482399 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zgp\" (UniqueName: \"kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bd28j\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:03 crc kubenswrapper[4737]: I0126 19:04:03.753279 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:04:04 crc kubenswrapper[4737]: I0126 19:04:04.310002 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j"] Jan 26 19:04:04 crc kubenswrapper[4737]: I0126 19:04:04.328064 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:04:04 crc kubenswrapper[4737]: I0126 19:04:04.972860 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" event={"ID":"5e950231-d00c-4fbd-b9de-a93d2d86eb36","Type":"ContainerStarted","Data":"5240f018fc5f5c3752f2fbfe0204bb18a8c5448dfbd9502ed6bad02e6c91b2a5"} Jan 26 19:04:05 crc kubenswrapper[4737]: I0126 19:04:05.984977 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" event={"ID":"5e950231-d00c-4fbd-b9de-a93d2d86eb36","Type":"ContainerStarted","Data":"a5bd376a41e4120692e95b89aa45df2fc2cf489c6ab1ea79b92ff0ae844c5c8e"} Jan 26 19:04:06 crc kubenswrapper[4737]: I0126 19:04:06.018151 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" podStartSLOduration=2.590052129 podStartE2EDuration="3.018129494s" podCreationTimestamp="2026-01-26 19:04:03 +0000 UTC" firstStartedPulling="2026-01-26 19:04:04.327809546 +0000 UTC m=+2017.636004264" lastFinishedPulling="2026-01-26 19:04:04.755886921 +0000 UTC m=+2018.064081629" observedRunningTime="2026-01-26 19:04:06.006376627 +0000 UTC m=+2019.314571345" watchObservedRunningTime="2026-01-26 19:04:06.018129494 +0000 UTC m=+2019.326324202" Jan 26 19:04:07 crc kubenswrapper[4737]: I0126 19:04:07.052203 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z8mqw"] Jan 26 19:04:07 crc kubenswrapper[4737]: I0126 
19:04:07.065239 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z8mqw"] Jan 26 19:04:09 crc kubenswrapper[4737]: I0126 19:04:09.000953 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9db6e67-d109-41f6-bd12-a68553ab3bf6" path="/var/lib/kubelet/pods/b9db6e67-d109-41f6-bd12-a68553ab3bf6/volumes" Jan 26 19:04:21 crc kubenswrapper[4737]: I0126 19:04:21.042900 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2mhwn"] Jan 26 19:04:21 crc kubenswrapper[4737]: I0126 19:04:21.055978 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2mhwn"] Jan 26 19:04:22 crc kubenswrapper[4737]: I0126 19:04:22.037156 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8950-account-create-update-l8njp"] Jan 26 19:04:22 crc kubenswrapper[4737]: I0126 19:04:22.051630 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8950-account-create-update-l8njp"] Jan 26 19:04:22 crc kubenswrapper[4737]: I0126 19:04:22.995612 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d05428-1fe0-474f-8b0e-761f90c035bd" path="/var/lib/kubelet/pods/15d05428-1fe0-474f-8b0e-761f90c035bd/volumes" Jan 26 19:04:22 crc kubenswrapper[4737]: I0126 19:04:22.997704 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e85eb58-126a-4fe4-9006-e46c8baceac8" path="/var/lib/kubelet/pods/7e85eb58-126a-4fe4-9006-e46c8baceac8/volumes" Jan 26 19:04:23 crc kubenswrapper[4737]: I0126 19:04:23.083751 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-j7bgx"] Jan 26 19:04:23 crc kubenswrapper[4737]: I0126 19:04:23.096546 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w2r6t"] Jan 26 19:04:23 crc kubenswrapper[4737]: I0126 19:04:23.109343 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-j7bgx"] Jan 26 
19:04:23 crc kubenswrapper[4737]: I0126 19:04:23.123117 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w2r6t"] Jan 26 19:04:24 crc kubenswrapper[4737]: I0126 19:04:24.996019 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b6ccc8-eb69-4780-b3dc-f53000859836" path="/var/lib/kubelet/pods/30b6ccc8-eb69-4780-b3dc-f53000859836/volumes" Jan 26 19:04:24 crc kubenswrapper[4737]: I0126 19:04:24.997292 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcff1539-022e-45f1-9e55-2e633b8a0346" path="/var/lib/kubelet/pods/bcff1539-022e-45f1-9e55-2e633b8a0346/volumes" Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.043834 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-698d-account-create-update-vzz2k"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.068555 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-p7gjm"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.079475 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-698d-account-create-update-vzz2k"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.090768 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6bef-account-create-update-nnbl4"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.101623 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-74b2-account-create-update-7gqdr"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.112118 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-p7gjm"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.123382 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6bef-account-create-update-nnbl4"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.133208 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-74b2-account-create-update-7gqdr"] Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.998321 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89375687-18cd-4325-87c3-6be0a83ebfd1" path="/var/lib/kubelet/pods/89375687-18cd-4325-87c3-6be0a83ebfd1/volumes" Jan 26 19:04:26 crc kubenswrapper[4737]: I0126 19:04:26.999865 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a431f6b9-1717-4441-88e6-81b22a7abde0" path="/var/lib/kubelet/pods/a431f6b9-1717-4441-88e6-81b22a7abde0/volumes" Jan 26 19:04:27 crc kubenswrapper[4737]: I0126 19:04:27.001216 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a" path="/var/lib/kubelet/pods/f06fdcbc-c0a2-4149-903f-cad2c7c9dc9a/volumes" Jan 26 19:04:27 crc kubenswrapper[4737]: I0126 19:04:27.002416 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82324be-8ee8-45b6-8f16-23c70c1e9011" path="/var/lib/kubelet/pods/f82324be-8ee8-45b6-8f16-23c70c1e9011/volumes" Jan 26 19:04:31 crc kubenswrapper[4737]: I0126 19:04:31.069003 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z87tf"] Jan 26 19:04:31 crc kubenswrapper[4737]: I0126 19:04:31.118161 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z87tf"] Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.656491 4737 scope.go:117] "RemoveContainer" containerID="a6f695f816a8da29b882b244979f3aa7a5752def6e1521c19f59a81dc9ca9de8" Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.688454 4737 scope.go:117] "RemoveContainer" containerID="354db663809afa6815ca0106d2eb43df1c1cfc00166bdc2dd4bfad1209d5940c" Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.759840 4737 scope.go:117] "RemoveContainer" containerID="95edc5c0585e3b1e7fa8f478d2913c1d1b1bb8aa7d88d2db5c8f3c342eae47be" Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.821931 4737 scope.go:117] 
"RemoveContainer" containerID="35e081f01a7a2ffb1625e45a980fb08216ed8ae600eff76c6c04e8be9677a3bc" Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.873917 4737 scope.go:117] "RemoveContainer" containerID="c0b86c586a0dd74fc3f94c27cc8df4b69d63c9e535411907c719269b90d2d16d" Jan 26 19:04:32 crc kubenswrapper[4737]: I0126 19:04:32.944357 4737 scope.go:117] "RemoveContainer" containerID="1f6b732a694472af725010e14d95dd6f76a2262a7c4689b900956af6492c75b9" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.000504 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86138c40-9654-4e2b-8fe9-13d418f93750" path="/var/lib/kubelet/pods/86138c40-9654-4e2b-8fe9-13d418f93750/volumes" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.008298 4737 scope.go:117] "RemoveContainer" containerID="eea43a1d80a8f10682440132a37fc26ab6737c3a964263f4286dce176f6d7459" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.038499 4737 scope.go:117] "RemoveContainer" containerID="10d526d891d2442ffbe1d9dbb86dd489ff37736db0231dc3417d39be137f6a19" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.063096 4737 scope.go:117] "RemoveContainer" containerID="0598d9556f51bbdab4b3ce9937f1dc5b50d46abd1d8303745114ea66bc7f0ce4" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.094860 4737 scope.go:117] "RemoveContainer" containerID="b5695aa4f93e4692032724d9acaf7537beb4dbcf2ee8a5e8ac43a123d4724b66" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.122484 4737 scope.go:117] "RemoveContainer" containerID="28013b9ec0f1f9f3bb8e98e8f8a262e6f3f2c7edcfdbec931ddaec24c8c15a96" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.149160 4737 scope.go:117] "RemoveContainer" containerID="80e54d313887f949b25e630eb5d0517b1f60fa9851f9bc2d5bf26545c5ad7579" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.169380 4737 scope.go:117] "RemoveContainer" containerID="4cd50035a3e48d31c3cd0ab6dcc00f71ad619f8897624e53fd8f7336e60488b6" Jan 26 19:04:33 crc kubenswrapper[4737]: 
I0126 19:04:33.192449 4737 scope.go:117] "RemoveContainer" containerID="5580bb646518ef6e746d4366b7a2e9e14969d9e203b4138af6f9580bd603416d" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.233790 4737 scope.go:117] "RemoveContainer" containerID="19fb879be2b1f5e4c909d3ae1f53209501da34f7194b3ac04975a500296a26f0" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.257241 4737 scope.go:117] "RemoveContainer" containerID="86bc79a2bab351c2dfebd5e893856dac42d6ef63370648a2e52d8bb5de7625af" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.290590 4737 scope.go:117] "RemoveContainer" containerID="126f46c0424173aaa97d436ffa78f8be0f8c62dedad0d5fac4a866c3980104a2" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.325568 4737 scope.go:117] "RemoveContainer" containerID="c0be04bd7efa2bcfe9cd8b7e461991b005c3c61b1d8d6258725a790524bb355d" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.350820 4737 scope.go:117] "RemoveContainer" containerID="58a5ad454430f0076c66274925b3c8b8a3b05c45ac6d886158b079bd6965f426" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.378002 4737 scope.go:117] "RemoveContainer" containerID="0233166c83b96ca780f8bf20d11d9e4c36794a2c5a1095fcae0d8f4383628120" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.403057 4737 scope.go:117] "RemoveContainer" containerID="74c87092c1d976faa5b23ff53c96d4f88178745c10530b0f67f1dd31577b9725" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.425535 4737 scope.go:117] "RemoveContainer" containerID="ebc5e53482312752e5620db1c3faf6a43156abe8180fd55a0742f27476539166" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.450243 4737 scope.go:117] "RemoveContainer" containerID="c71cbd06b9f2c1575dd0d13338932cb066dc87ea66baddeafdefe031565e0be4" Jan 26 19:04:33 crc kubenswrapper[4737]: I0126 19:04:33.474159 4737 scope.go:117] "RemoveContainer" containerID="e42da6a8906b8f31ce69bc4df544287660d854164363428666a1ec3065ce5f7e" Jan 26 19:05:02 crc kubenswrapper[4737]: I0126 19:05:02.047164 4737 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sk8gf"] Jan 26 19:05:02 crc kubenswrapper[4737]: I0126 19:05:02.058997 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sk8gf"] Jan 26 19:05:02 crc kubenswrapper[4737]: I0126 19:05:02.996731 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccac15d0-8553-4c25-9bac-4f65d06e7d0e" path="/var/lib/kubelet/pods/ccac15d0-8553-4c25-9bac-4f65d06e7d0e/volumes" Jan 26 19:05:17 crc kubenswrapper[4737]: I0126 19:05:17.035255 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8nbml"] Jan 26 19:05:17 crc kubenswrapper[4737]: I0126 19:05:17.048956 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vbj8n"] Jan 26 19:05:17 crc kubenswrapper[4737]: I0126 19:05:17.060998 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8nbml"] Jan 26 19:05:17 crc kubenswrapper[4737]: I0126 19:05:17.072744 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vbj8n"] Jan 26 19:05:19 crc kubenswrapper[4737]: I0126 19:05:19.017466 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11147190-1d45-4798-83d7-449cd574a296" path="/var/lib/kubelet/pods/11147190-1d45-4798-83d7-449cd574a296/volumes" Jan 26 19:05:19 crc kubenswrapper[4737]: I0126 19:05:19.022925 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ecae78-d5c7-4104-b28e-fd9d70a69dc5" path="/var/lib/kubelet/pods/59ecae78-d5c7-4104-b28e-fd9d70a69dc5/volumes" Jan 26 19:05:25 crc kubenswrapper[4737]: I0126 19:05:25.042743 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-crvp5"] Jan 26 19:05:25 crc kubenswrapper[4737]: I0126 19:05:25.054596 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-crvp5"] Jan 26 19:05:27 crc kubenswrapper[4737]: I0126 
19:05:27.022300 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ee14c5-9b8d-4903-afc7-0b7c643b2756" path="/var/lib/kubelet/pods/31ee14c5-9b8d-4903-afc7-0b7c643b2756/volumes" Jan 26 19:05:27 crc kubenswrapper[4737]: I0126 19:05:27.052690 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5pb7v"] Jan 26 19:05:27 crc kubenswrapper[4737]: I0126 19:05:27.068263 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5pb7v"] Jan 26 19:05:28 crc kubenswrapper[4737]: I0126 19:05:28.994811 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac069b5-db5e-47ec-ada0-7e6acf1af111" path="/var/lib/kubelet/pods/cac069b5-db5e-47ec-ada0-7e6acf1af111/volumes" Jan 26 19:05:34 crc kubenswrapper[4737]: I0126 19:05:34.243241 4737 scope.go:117] "RemoveContainer" containerID="a2cb887eb23910e377c3962c778ebf4c69b9b70feab0dfb04d4461abc41fd260" Jan 26 19:05:34 crc kubenswrapper[4737]: I0126 19:05:34.269991 4737 scope.go:117] "RemoveContainer" containerID="0812037e61aaa15557e83ff51841b9c58954816a3e829827c7b6ca441d2a80ac" Jan 26 19:05:34 crc kubenswrapper[4737]: I0126 19:05:34.354107 4737 scope.go:117] "RemoveContainer" containerID="8da202852f6931d217e4caa89c850e91d6bf2550e6e26e0f040d0f3d96273499" Jan 26 19:05:34 crc kubenswrapper[4737]: I0126 19:05:34.410456 4737 scope.go:117] "RemoveContainer" containerID="1a55b5355727b4b9301d1e272dea5dd64862e9b091b399e73471988209bb6ceb" Jan 26 19:05:34 crc kubenswrapper[4737]: I0126 19:05:34.487742 4737 scope.go:117] "RemoveContainer" containerID="cc68631ceb5ab7897346be8341af243713cf34e8432f039ed3d3d66dbcd8ac62" Jan 26 19:06:00 crc kubenswrapper[4737]: I0126 19:06:00.949176 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 19:06:00 crc kubenswrapper[4737]: I0126 19:06:00.949817 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.619123 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.622572 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.648879 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.713954 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.714718 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.714854 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdlc5\" (UniqueName: 
\"kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.818935 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.819099 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.819151 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdlc5\" (UniqueName: \"kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.820205 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.820358 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.843743 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdlc5\" (UniqueName: \"kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5\") pod \"redhat-operators-n5k4b\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:27 crc kubenswrapper[4737]: I0126 19:06:27.959836 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:28 crc kubenswrapper[4737]: I0126 19:06:28.511900 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:28 crc kubenswrapper[4737]: W0126 19:06:28.514480 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe416f29_4340_4371_a15c_d37b65291650.slice/crio-cb6040d8f73f8fec24c51cf539d5e7a65333200e2cdb643869759324ad463a2b WatchSource:0}: Error finding container cb6040d8f73f8fec24c51cf539d5e7a65333200e2cdb643869759324ad463a2b: Status 404 returned error can't find the container with id cb6040d8f73f8fec24c51cf539d5e7a65333200e2cdb643869759324ad463a2b Jan 26 19:06:28 crc kubenswrapper[4737]: I0126 19:06:28.784287 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerStarted","Data":"cb6040d8f73f8fec24c51cf539d5e7a65333200e2cdb643869759324ad463a2b"} Jan 26 19:06:29 crc kubenswrapper[4737]: I0126 19:06:29.839425 4737 generic.go:334] "Generic (PLEG): container finished" podID="fe416f29-4340-4371-a15c-d37b65291650" 
containerID="6e4dc0b363b4ad902a271c469c6fd71c2ca59e39fd15387bfdbe36c2e47b2741" exitCode=0 Jan 26 19:06:29 crc kubenswrapper[4737]: I0126 19:06:29.839741 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerDied","Data":"6e4dc0b363b4ad902a271c469c6fd71c2ca59e39fd15387bfdbe36c2e47b2741"} Jan 26 19:06:30 crc kubenswrapper[4737]: I0126 19:06:30.854251 4737 generic.go:334] "Generic (PLEG): container finished" podID="5e950231-d00c-4fbd-b9de-a93d2d86eb36" containerID="a5bd376a41e4120692e95b89aa45df2fc2cf489c6ab1ea79b92ff0ae844c5c8e" exitCode=0 Jan 26 19:06:30 crc kubenswrapper[4737]: I0126 19:06:30.854350 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" event={"ID":"5e950231-d00c-4fbd-b9de-a93d2d86eb36","Type":"ContainerDied","Data":"a5bd376a41e4120692e95b89aa45df2fc2cf489c6ab1ea79b92ff0ae844c5c8e"} Jan 26 19:06:30 crc kubenswrapper[4737]: I0126 19:06:30.948850 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:06:30 crc kubenswrapper[4737]: I0126 19:06:30.948927 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:06:31 crc kubenswrapper[4737]: I0126 19:06:31.866501 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" 
event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerStarted","Data":"97f24ff07d25edd07924b160964b1e8177e132bd9a4f0620d504c12d20c322e9"} Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.449722 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.549232 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6zgp\" (UniqueName: \"kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp\") pod \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.549469 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory\") pod \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.549579 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") pod \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.619818 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp" (OuterVolumeSpecName: "kube-api-access-r6zgp") pod "5e950231-d00c-4fbd-b9de-a93d2d86eb36" (UID: "5e950231-d00c-4fbd-b9de-a93d2d86eb36"). InnerVolumeSpecName "kube-api-access-r6zgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.654547 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e950231-d00c-4fbd-b9de-a93d2d86eb36" (UID: "5e950231-d00c-4fbd-b9de-a93d2d86eb36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.654897 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") pod \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\" (UID: \"5e950231-d00c-4fbd-b9de-a93d2d86eb36\") " Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.655796 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6zgp\" (UniqueName: \"kubernetes.io/projected/5e950231-d00c-4fbd-b9de-a93d2d86eb36-kube-api-access-r6zgp\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:32 crc kubenswrapper[4737]: W0126 19:06:32.655875 4737 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5e950231-d00c-4fbd-b9de-a93d2d86eb36/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.655884 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e950231-d00c-4fbd-b9de-a93d2d86eb36" (UID: "5e950231-d00c-4fbd-b9de-a93d2d86eb36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.668118 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory" (OuterVolumeSpecName: "inventory") pod "5e950231-d00c-4fbd-b9de-a93d2d86eb36" (UID: "5e950231-d00c-4fbd-b9de-a93d2d86eb36"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.758424 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.758461 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e950231-d00c-4fbd-b9de-a93d2d86eb36-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.881758 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.881744 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bd28j" event={"ID":"5e950231-d00c-4fbd-b9de-a93d2d86eb36","Type":"ContainerDied","Data":"5240f018fc5f5c3752f2fbfe0204bb18a8c5448dfbd9502ed6bad02e6c91b2a5"} Jan 26 19:06:32 crc kubenswrapper[4737]: I0126 19:06:32.881831 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5240f018fc5f5c3752f2fbfe0204bb18a8c5448dfbd9502ed6bad02e6c91b2a5" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.009841 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk"] Jan 26 19:06:33 crc kubenswrapper[4737]: E0126 19:06:33.010379 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e950231-d00c-4fbd-b9de-a93d2d86eb36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.010404 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e950231-d00c-4fbd-b9de-a93d2d86eb36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.010677 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e950231-d00c-4fbd-b9de-a93d2d86eb36" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.011527 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.015017 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.015478 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.015497 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.017293 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.026481 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk"] Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.066464 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.066601 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xlk\" (UniqueName: \"kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: 
I0126 19:06:33.066763 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.169684 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.169787 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.169893 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xlk\" (UniqueName: \"kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.181625 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.181644 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.192430 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xlk\" (UniqueName: \"kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:33 crc kubenswrapper[4737]: I0126 19:06:33.329700 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:06:34 crc kubenswrapper[4737]: I0126 19:06:34.108024 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk"] Jan 26 19:06:34 crc kubenswrapper[4737]: I0126 19:06:34.907677 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" event={"ID":"f606c12b-460a-4ec1-ac57-d4e5667945de","Type":"ContainerStarted","Data":"f13c5725c9460fb4d5b0d9fbea7858152a6441b7f44dad8f70d6c0e71355785d"} Jan 26 19:06:34 crc kubenswrapper[4737]: I0126 19:06:34.920793 4737 generic.go:334] "Generic (PLEG): container finished" podID="fe416f29-4340-4371-a15c-d37b65291650" containerID="97f24ff07d25edd07924b160964b1e8177e132bd9a4f0620d504c12d20c322e9" exitCode=0 Jan 26 19:06:34 crc kubenswrapper[4737]: I0126 19:06:34.921226 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerDied","Data":"97f24ff07d25edd07924b160964b1e8177e132bd9a4f0620d504c12d20c322e9"} Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.062535 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-h7hhd"] Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.076301 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-75zv4"] Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.093608 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-h7hhd"] Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.106814 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-75zv4"] Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.934539 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerStarted","Data":"9f0c0e3ad0941f1b4634bf0d5f13943f6ce86491a450cb253c4c8f6700a3597f"} Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.936416 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" event={"ID":"f606c12b-460a-4ec1-ac57-d4e5667945de","Type":"ContainerStarted","Data":"7dbb0b6e810ad6fba96a74fd49770b3f9124983938edb8e2099cbb1e1abb6ae6"} Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.959916 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" podStartSLOduration=3.2356988 podStartE2EDuration="3.959893257s" podCreationTimestamp="2026-01-26 19:06:32 +0000 UTC" firstStartedPulling="2026-01-26 19:06:34.110349198 +0000 UTC m=+2167.418543906" lastFinishedPulling="2026-01-26 19:06:34.834543655 +0000 UTC m=+2168.142738363" observedRunningTime="2026-01-26 19:06:35.954160686 +0000 UTC m=+2169.262355404" watchObservedRunningTime="2026-01-26 19:06:35.959893257 +0000 UTC m=+2169.268087965" Jan 26 19:06:35 crc kubenswrapper[4737]: I0126 19:06:35.975051 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5k4b" podStartSLOduration=3.478831866 podStartE2EDuration="8.975030457s" podCreationTimestamp="2026-01-26 19:06:27 +0000 UTC" firstStartedPulling="2026-01-26 19:06:29.846810427 +0000 UTC m=+2163.155005135" lastFinishedPulling="2026-01-26 19:06:35.343009018 +0000 UTC m=+2168.651203726" observedRunningTime="2026-01-26 19:06:35.972471695 +0000 UTC m=+2169.280666423" watchObservedRunningTime="2026-01-26 19:06:35.975030457 +0000 UTC m=+2169.283225165" Jan 26 19:06:36 crc kubenswrapper[4737]: I0126 19:06:36.030507 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-g9xvw"] 
Jan 26 19:06:36 crc kubenswrapper[4737]: I0126 19:06:36.044251 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-g9xvw"] Jan 26 19:06:36 crc kubenswrapper[4737]: I0126 19:06:36.994430 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f" path="/var/lib/kubelet/pods/8ba096b6-6f0d-4c4f-afd1-40d5c8ba5e7f/volumes" Jan 26 19:06:36 crc kubenswrapper[4737]: I0126 19:06:36.995854 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47" path="/var/lib/kubelet/pods/8df0f55f-4f9c-4ef5-88f1-16a3a5ec1d47/volumes" Jan 26 19:06:36 crc kubenswrapper[4737]: I0126 19:06:36.997016 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07b7037-d1bb-485f-a2e0-951b51de8c74" path="/var/lib/kubelet/pods/e07b7037-d1bb-485f-a2e0-951b51de8c74/volumes" Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.036419 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4c57-account-create-update-xw7qm"] Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.054355 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4c57-account-create-update-xw7qm"] Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.066409 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2bf9-account-create-update-kccxv"] Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.076319 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3193-account-create-update-8m9fw"] Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.086267 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2bf9-account-create-update-kccxv"] Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.095953 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3193-account-create-update-8m9fw"] Jan 26 19:06:37 
crc kubenswrapper[4737]: I0126 19:06:37.960683 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:37 crc kubenswrapper[4737]: I0126 19:06:37.960732 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:38 crc kubenswrapper[4737]: I0126 19:06:38.995294 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcd08ca-7be6-4684-b83d-19a94dee32ad" path="/var/lib/kubelet/pods/0bcd08ca-7be6-4684-b83d-19a94dee32ad/volumes" Jan 26 19:06:38 crc kubenswrapper[4737]: I0126 19:06:38.996524 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8f2357-50f9-46d8-9527-f04533ce926b" path="/var/lib/kubelet/pods/ea8f2357-50f9-46d8-9527-f04533ce926b/volumes" Jan 26 19:06:38 crc kubenswrapper[4737]: I0126 19:06:38.998171 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2b468d-e649-4320-8687-bc3b4ed09593" path="/var/lib/kubelet/pods/ec2b468d-e649-4320-8687-bc3b4ed09593/volumes" Jan 26 19:06:39 crc kubenswrapper[4737]: I0126 19:06:39.009162 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n5k4b" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="registry-server" probeResult="failure" output=< Jan 26 19:06:39 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:06:39 crc kubenswrapper[4737]: > Jan 26 19:06:48 crc kubenswrapper[4737]: I0126 19:06:48.012287 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:48 crc kubenswrapper[4737]: I0126 19:06:48.066932 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:51 crc kubenswrapper[4737]: I0126 19:06:51.782119 4737 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:51 crc kubenswrapper[4737]: I0126 19:06:51.784054 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5k4b" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="registry-server" containerID="cri-o://9f0c0e3ad0941f1b4634bf0d5f13943f6ce86491a450cb253c4c8f6700a3597f" gracePeriod=2 Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.115566 4737 generic.go:334] "Generic (PLEG): container finished" podID="fe416f29-4340-4371-a15c-d37b65291650" containerID="9f0c0e3ad0941f1b4634bf0d5f13943f6ce86491a450cb253c4c8f6700a3597f" exitCode=0 Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.116001 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerDied","Data":"9f0c0e3ad0941f1b4634bf0d5f13943f6ce86491a450cb253c4c8f6700a3597f"} Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.380260 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.463012 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdlc5\" (UniqueName: \"kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5\") pod \"fe416f29-4340-4371-a15c-d37b65291650\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.463143 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities\") pod \"fe416f29-4340-4371-a15c-d37b65291650\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.463259 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content\") pod \"fe416f29-4340-4371-a15c-d37b65291650\" (UID: \"fe416f29-4340-4371-a15c-d37b65291650\") " Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.464406 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities" (OuterVolumeSpecName: "utilities") pod "fe416f29-4340-4371-a15c-d37b65291650" (UID: "fe416f29-4340-4371-a15c-d37b65291650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.476278 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5" (OuterVolumeSpecName: "kube-api-access-xdlc5") pod "fe416f29-4340-4371-a15c-d37b65291650" (UID: "fe416f29-4340-4371-a15c-d37b65291650"). InnerVolumeSpecName "kube-api-access-xdlc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.569203 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.569535 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdlc5\" (UniqueName: \"kubernetes.io/projected/fe416f29-4340-4371-a15c-d37b65291650-kube-api-access-xdlc5\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.608393 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe416f29-4340-4371-a15c-d37b65291650" (UID: "fe416f29-4340-4371-a15c-d37b65291650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:06:52 crc kubenswrapper[4737]: I0126 19:06:52.672511 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe416f29-4340-4371-a15c-d37b65291650-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.128987 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5k4b" event={"ID":"fe416f29-4340-4371-a15c-d37b65291650","Type":"ContainerDied","Data":"cb6040d8f73f8fec24c51cf539d5e7a65333200e2cdb643869759324ad463a2b"} Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.129063 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5k4b" Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.129361 4737 scope.go:117] "RemoveContainer" containerID="9f0c0e3ad0941f1b4634bf0d5f13943f6ce86491a450cb253c4c8f6700a3597f" Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.164878 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.173506 4737 scope.go:117] "RemoveContainer" containerID="97f24ff07d25edd07924b160964b1e8177e132bd9a4f0620d504c12d20c322e9" Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.186831 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5k4b"] Jan 26 19:06:53 crc kubenswrapper[4737]: I0126 19:06:53.202028 4737 scope.go:117] "RemoveContainer" containerID="6e4dc0b363b4ad902a271c469c6fd71c2ca59e39fd15387bfdbe36c2e47b2741" Jan 26 19:06:54 crc kubenswrapper[4737]: I0126 19:06:54.997474 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe416f29-4340-4371-a15c-d37b65291650" path="/var/lib/kubelet/pods/fe416f29-4340-4371-a15c-d37b65291650/volumes" Jan 26 19:07:00 crc kubenswrapper[4737]: I0126 19:07:00.948881 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:07:00 crc kubenswrapper[4737]: I0126 19:07:00.949436 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:07:00 crc kubenswrapper[4737]: I0126 
19:07:00.949499 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:07:00 crc kubenswrapper[4737]: I0126 19:07:00.950479 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:07:00 crc kubenswrapper[4737]: I0126 19:07:00.950571 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3" gracePeriod=600 Jan 26 19:07:01 crc kubenswrapper[4737]: I0126 19:07:01.253988 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3" exitCode=0 Jan 26 19:07:01 crc kubenswrapper[4737]: I0126 19:07:01.254108 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3"} Jan 26 19:07:01 crc kubenswrapper[4737]: I0126 19:07:01.254919 4737 scope.go:117] "RemoveContainer" containerID="1118354a04db19a991298cf7d8a2d128f4afb57f133e36502b231054abcee336" Jan 26 19:07:02 crc kubenswrapper[4737]: I0126 19:07:02.268326 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f"} Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.615138 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:05 crc kubenswrapper[4737]: E0126 19:07:05.616728 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="registry-server" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.616748 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="registry-server" Jan 26 19:07:05 crc kubenswrapper[4737]: E0126 19:07:05.616771 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="extract-utilities" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.616780 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="extract-utilities" Jan 26 19:07:05 crc kubenswrapper[4737]: E0126 19:07:05.616839 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="extract-content" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.616846 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="extract-content" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.617183 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe416f29-4340-4371-a15c-d37b65291650" containerName="registry-server" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.619915 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.632513 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.816106 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.816910 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4wtw\" (UniqueName: \"kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.817131 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.919653 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4wtw\" (UniqueName: \"kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.920030 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.920246 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.920661 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.920836 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.943787 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4wtw\" (UniqueName: \"kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw\") pod \"certified-operators-fp7d9\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:05 crc kubenswrapper[4737]: I0126 19:07:05.954268 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:06 crc kubenswrapper[4737]: I0126 19:07:06.550229 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:07 crc kubenswrapper[4737]: I0126 19:07:07.331843 4737 generic.go:334] "Generic (PLEG): container finished" podID="5d31247e-13b1-41dd-9253-884166bd540c" containerID="344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b" exitCode=0 Jan 26 19:07:07 crc kubenswrapper[4737]: I0126 19:07:07.331927 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerDied","Data":"344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b"} Jan 26 19:07:07 crc kubenswrapper[4737]: I0126 19:07:07.332214 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerStarted","Data":"216a8f9debe1f63a41229137c293142255a698e481a2938e36fd3f4e5bfcc3bc"} Jan 26 19:07:08 crc kubenswrapper[4737]: I0126 19:07:08.344496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerStarted","Data":"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c"} Jan 26 19:07:09 crc kubenswrapper[4737]: I0126 19:07:09.358650 4737 generic.go:334] "Generic (PLEG): container finished" podID="5d31247e-13b1-41dd-9253-884166bd540c" containerID="157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c" exitCode=0 Jan 26 19:07:09 crc kubenswrapper[4737]: I0126 19:07:09.358730 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" 
event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerDied","Data":"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c"} Jan 26 19:07:10 crc kubenswrapper[4737]: I0126 19:07:10.055378 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s2dcv"] Jan 26 19:07:10 crc kubenswrapper[4737]: I0126 19:07:10.065780 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s2dcv"] Jan 26 19:07:10 crc kubenswrapper[4737]: I0126 19:07:10.373823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerStarted","Data":"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682"} Jan 26 19:07:10 crc kubenswrapper[4737]: I0126 19:07:10.404345 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fp7d9" podStartSLOduration=2.998168936 podStartE2EDuration="5.404325327s" podCreationTimestamp="2026-01-26 19:07:05 +0000 UTC" firstStartedPulling="2026-01-26 19:07:07.333752194 +0000 UTC m=+2200.641946902" lastFinishedPulling="2026-01-26 19:07:09.739908585 +0000 UTC m=+2203.048103293" observedRunningTime="2026-01-26 19:07:10.394742503 +0000 UTC m=+2203.702937221" watchObservedRunningTime="2026-01-26 19:07:10.404325327 +0000 UTC m=+2203.712520035" Jan 26 19:07:10 crc kubenswrapper[4737]: I0126 19:07:10.996167 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a24b527-6d52-4550-9e95-543e53e4a0fc" path="/var/lib/kubelet/pods/9a24b527-6d52-4550-9e95-543e53e4a0fc/volumes" Jan 26 19:07:15 crc kubenswrapper[4737]: I0126 19:07:15.955242 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:15 crc kubenswrapper[4737]: I0126 19:07:15.955850 4737 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:16 crc kubenswrapper[4737]: I0126 19:07:16.007104 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:16 crc kubenswrapper[4737]: I0126 19:07:16.494704 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:16 crc kubenswrapper[4737]: I0126 19:07:16.548497 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:18 crc kubenswrapper[4737]: I0126 19:07:18.455534 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fp7d9" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="registry-server" containerID="cri-o://b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682" gracePeriod=2 Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.034129 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.116749 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities\") pod \"5d31247e-13b1-41dd-9253-884166bd540c\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.117217 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4wtw\" (UniqueName: \"kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw\") pod \"5d31247e-13b1-41dd-9253-884166bd540c\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.117247 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content\") pod \"5d31247e-13b1-41dd-9253-884166bd540c\" (UID: \"5d31247e-13b1-41dd-9253-884166bd540c\") " Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.117826 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities" (OuterVolumeSpecName: "utilities") pod "5d31247e-13b1-41dd-9253-884166bd540c" (UID: "5d31247e-13b1-41dd-9253-884166bd540c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.123397 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw" (OuterVolumeSpecName: "kube-api-access-t4wtw") pod "5d31247e-13b1-41dd-9253-884166bd540c" (UID: "5d31247e-13b1-41dd-9253-884166bd540c"). InnerVolumeSpecName "kube-api-access-t4wtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.166304 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d31247e-13b1-41dd-9253-884166bd540c" (UID: "5d31247e-13b1-41dd-9253-884166bd540c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.219985 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4wtw\" (UniqueName: \"kubernetes.io/projected/5d31247e-13b1-41dd-9253-884166bd540c-kube-api-access-t4wtw\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.220025 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.220040 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31247e-13b1-41dd-9253-884166bd540c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.467453 4737 generic.go:334] "Generic (PLEG): container finished" podID="5d31247e-13b1-41dd-9253-884166bd540c" containerID="b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682" exitCode=0 Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.467506 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerDied","Data":"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682"} Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.467537 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fp7d9" event={"ID":"5d31247e-13b1-41dd-9253-884166bd540c","Type":"ContainerDied","Data":"216a8f9debe1f63a41229137c293142255a698e481a2938e36fd3f4e5bfcc3bc"} Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.467555 4737 scope.go:117] "RemoveContainer" containerID="b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.467693 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fp7d9" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.501908 4737 scope.go:117] "RemoveContainer" containerID="157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.526891 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.540332 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fp7d9"] Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.546729 4737 scope.go:117] "RemoveContainer" containerID="344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.595511 4737 scope.go:117] "RemoveContainer" containerID="b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682" Jan 26 19:07:19 crc kubenswrapper[4737]: E0126 19:07:19.596158 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682\": container with ID starting with b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682 not found: ID does not exist" containerID="b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 
19:07:19.597122 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682"} err="failed to get container status \"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682\": rpc error: code = NotFound desc = could not find container \"b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682\": container with ID starting with b0a21abd3e116d259c5677a976b215f3cdb83dc4b30ba6474db928d312070682 not found: ID does not exist" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.597258 4737 scope.go:117] "RemoveContainer" containerID="157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c" Jan 26 19:07:19 crc kubenswrapper[4737]: E0126 19:07:19.598552 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c\": container with ID starting with 157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c not found: ID does not exist" containerID="157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.598677 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c"} err="failed to get container status \"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c\": rpc error: code = NotFound desc = could not find container \"157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c\": container with ID starting with 157a952ab8b38c9f126b7a2fbb930082baf1bee6869fc0a3df2b83646a6ad05c not found: ID does not exist" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.598773 4737 scope.go:117] "RemoveContainer" containerID="344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b" Jan 26 19:07:19 crc 
kubenswrapper[4737]: E0126 19:07:19.599251 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b\": container with ID starting with 344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b not found: ID does not exist" containerID="344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b" Jan 26 19:07:19 crc kubenswrapper[4737]: I0126 19:07:19.599301 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b"} err="failed to get container status \"344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b\": rpc error: code = NotFound desc = could not find container \"344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b\": container with ID starting with 344cee3d4e62323fd0195dcd98c18e29f87dfdc9a4e5af7f55577ad0fd81e43b not found: ID does not exist" Jan 26 19:07:20 crc kubenswrapper[4737]: I0126 19:07:20.995706 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d31247e-13b1-41dd-9253-884166bd540c" path="/var/lib/kubelet/pods/5d31247e-13b1-41dd-9253-884166bd540c/volumes" Jan 26 19:07:33 crc kubenswrapper[4737]: I0126 19:07:33.049684 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-j866g"] Jan 26 19:07:33 crc kubenswrapper[4737]: I0126 19:07:33.060498 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-j866g"] Jan 26 19:07:34 crc kubenswrapper[4737]: I0126 19:07:34.679462 4737 scope.go:117] "RemoveContainer" containerID="7a5ab34ef8c18a3f42ad36a0fe0dea4a3a1521e1b4027853de0176849388bfc1" Jan 26 19:07:34 crc kubenswrapper[4737]: I0126 19:07:34.723246 4737 scope.go:117] "RemoveContainer" containerID="bd8c76d9bc90f419a1408d1209e85ee77bc4dfcfc2d4fe09745ce39583a6c986" Jan 26 19:07:34 crc 
kubenswrapper[4737]: I0126 19:07:34.779960 4737 scope.go:117] "RemoveContainer" containerID="a5ccc9d03de02387d0b5845fc439b27e44b744fd08f9f6e0335a795f24f6a471" Jan 26 19:07:34 crc kubenswrapper[4737]: I0126 19:07:34.846364 4737 scope.go:117] "RemoveContainer" containerID="9550a24705751f7a1b329052cfaa40e7a39b4389b6801007d25f80bc6fe485a2" Jan 26 19:07:34 crc kubenswrapper[4737]: I0126 19:07:34.915796 4737 scope.go:117] "RemoveContainer" containerID="0fa505b2da759bce2da43968177c07e66fba26bacdb584fa732d441bb7bca5c5" Jan 26 19:07:34 crc kubenswrapper[4737]: I0126 19:07:34.979282 4737 scope.go:117] "RemoveContainer" containerID="8676037a138c03205571ce081641fb8e12b7eb1050fb674d7b55b047ce4b6d95" Jan 26 19:07:35 crc kubenswrapper[4737]: I0126 19:07:35.001060 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bea1a20-5eb7-4003-8fdd-43ecb5fb550a" path="/var/lib/kubelet/pods/5bea1a20-5eb7-4003-8fdd-43ecb5fb550a/volumes" Jan 26 19:07:35 crc kubenswrapper[4737]: I0126 19:07:35.064788 4737 scope.go:117] "RemoveContainer" containerID="fa5bc3e8945224807808ed3cc617cc7e4a6f9b8c9f533c7884cc9455bbaeda2c" Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.040098 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b22wn"] Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.055691 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b22wn"] Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.067840 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-qcqmg"] Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.080995 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-qcqmg"] Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.997559 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410f0427-0248-40f9-adc7-33af510f7842" 
path="/var/lib/kubelet/pods/410f0427-0248-40f9-adc7-33af510f7842/volumes" Jan 26 19:07:36 crc kubenswrapper[4737]: I0126 19:07:36.998770 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e850b319-4b13-4da1-a138-3373c2c6ecd2" path="/var/lib/kubelet/pods/e850b319-4b13-4da1-a138-3373c2c6ecd2/volumes" Jan 26 19:07:37 crc kubenswrapper[4737]: I0126 19:07:37.031501 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7e93-account-create-update-gxsv8"] Jan 26 19:07:37 crc kubenswrapper[4737]: I0126 19:07:37.041429 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7e93-account-create-update-gxsv8"] Jan 26 19:07:38 crc kubenswrapper[4737]: I0126 19:07:38.996117 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45673b67-f4a4-4100-adfa-6cdb3a83f093" path="/var/lib/kubelet/pods/45673b67-f4a4-4100-adfa-6cdb3a83f093/volumes" Jan 26 19:07:49 crc kubenswrapper[4737]: I0126 19:07:49.789011 4737 generic.go:334] "Generic (PLEG): container finished" podID="f606c12b-460a-4ec1-ac57-d4e5667945de" containerID="7dbb0b6e810ad6fba96a74fd49770b3f9124983938edb8e2099cbb1e1abb6ae6" exitCode=0 Jan 26 19:07:49 crc kubenswrapper[4737]: I0126 19:07:49.789085 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" event={"ID":"f606c12b-460a-4ec1-ac57-d4e5667945de","Type":"ContainerDied","Data":"7dbb0b6e810ad6fba96a74fd49770b3f9124983938edb8e2099cbb1e1abb6ae6"} Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.294931 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.420246 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam\") pod \"f606c12b-460a-4ec1-ac57-d4e5667945de\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.420856 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xlk\" (UniqueName: \"kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk\") pod \"f606c12b-460a-4ec1-ac57-d4e5667945de\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.420969 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory\") pod \"f606c12b-460a-4ec1-ac57-d4e5667945de\" (UID: \"f606c12b-460a-4ec1-ac57-d4e5667945de\") " Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.433519 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk" (OuterVolumeSpecName: "kube-api-access-z2xlk") pod "f606c12b-460a-4ec1-ac57-d4e5667945de" (UID: "f606c12b-460a-4ec1-ac57-d4e5667945de"). InnerVolumeSpecName "kube-api-access-z2xlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.465536 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f606c12b-460a-4ec1-ac57-d4e5667945de" (UID: "f606c12b-460a-4ec1-ac57-d4e5667945de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.466552 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory" (OuterVolumeSpecName: "inventory") pod "f606c12b-460a-4ec1-ac57-d4e5667945de" (UID: "f606c12b-460a-4ec1-ac57-d4e5667945de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.523860 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xlk\" (UniqueName: \"kubernetes.io/projected/f606c12b-460a-4ec1-ac57-d4e5667945de-kube-api-access-z2xlk\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.524193 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.524264 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f606c12b-460a-4ec1-ac57-d4e5667945de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.810369 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" 
event={"ID":"f606c12b-460a-4ec1-ac57-d4e5667945de","Type":"ContainerDied","Data":"f13c5725c9460fb4d5b0d9fbea7858152a6441b7f44dad8f70d6c0e71355785d"} Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.810418 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13c5725c9460fb4d5b0d9fbea7858152a6441b7f44dad8f70d6c0e71355785d" Jan 26 19:07:51 crc kubenswrapper[4737]: I0126 19:07:51.810479 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.006131 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm"] Jan 26 19:07:52 crc kubenswrapper[4737]: E0126 19:07:52.007783 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="extract-utilities" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.007904 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="extract-utilities" Jan 26 19:07:52 crc kubenswrapper[4737]: E0126 19:07:52.008014 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="registry-server" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.008091 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="registry-server" Jan 26 19:07:52 crc kubenswrapper[4737]: E0126 19:07:52.008174 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="extract-content" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.008229 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="extract-content" Jan 26 19:07:52 crc 
kubenswrapper[4737]: E0126 19:07:52.008304 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f606c12b-460a-4ec1-ac57-d4e5667945de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.008357 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f606c12b-460a-4ec1-ac57-d4e5667945de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.008647 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f606c12b-460a-4ec1-ac57-d4e5667945de" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.008729 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d31247e-13b1-41dd-9253-884166bd540c" containerName="registry-server" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.009625 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.020546 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.020811 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.020920 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.021313 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.034458 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm"] Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.038330 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk4rk\" (UniqueName: \"kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.038460 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 
19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.038488 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.140741 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.141040 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.141378 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk4rk\" (UniqueName: \"kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.147587 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.147632 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.164354 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk4rk\" (UniqueName: \"kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-74xdm\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.337941 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:07:52 crc kubenswrapper[4737]: I0126 19:07:52.932534 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm"] Jan 26 19:07:53 crc kubenswrapper[4737]: I0126 19:07:53.834752 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" event={"ID":"bb314574-7438-4911-8b54-a1ccfa5a907d","Type":"ContainerStarted","Data":"7169f23915fad763d3a91f96c9fc3cd8dd368a3c85ee2b371a5a785f355a4f6f"} Jan 26 19:07:53 crc kubenswrapper[4737]: I0126 19:07:53.838391 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" event={"ID":"bb314574-7438-4911-8b54-a1ccfa5a907d","Type":"ContainerStarted","Data":"0416229b085ad172ccd35abad22dedaca4dd73a3e902d38b8316f5c4ed255baf"} Jan 26 19:07:53 crc kubenswrapper[4737]: I0126 19:07:53.864135 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" podStartSLOduration=2.448751454 podStartE2EDuration="2.864063186s" podCreationTimestamp="2026-01-26 19:07:51 +0000 UTC" firstStartedPulling="2026-01-26 19:07:52.936276132 +0000 UTC m=+2246.244470840" lastFinishedPulling="2026-01-26 19:07:53.351587864 +0000 UTC m=+2246.659782572" observedRunningTime="2026-01-26 19:07:53.856026109 +0000 UTC m=+2247.164220847" watchObservedRunningTime="2026-01-26 19:07:53.864063186 +0000 UTC m=+2247.172257894" Jan 26 19:07:58 crc kubenswrapper[4737]: I0126 19:07:58.887924 4737 generic.go:334] "Generic (PLEG): container finished" podID="bb314574-7438-4911-8b54-a1ccfa5a907d" containerID="7169f23915fad763d3a91f96c9fc3cd8dd368a3c85ee2b371a5a785f355a4f6f" exitCode=0 Jan 26 19:07:58 crc kubenswrapper[4737]: I0126 19:07:58.888608 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" event={"ID":"bb314574-7438-4911-8b54-a1ccfa5a907d","Type":"ContainerDied","Data":"7169f23915fad763d3a91f96c9fc3cd8dd368a3c85ee2b371a5a785f355a4f6f"} Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.488805 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.552406 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk4rk\" (UniqueName: \"kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk\") pod \"bb314574-7438-4911-8b54-a1ccfa5a907d\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.552625 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam\") pod \"bb314574-7438-4911-8b54-a1ccfa5a907d\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.552820 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory\") pod \"bb314574-7438-4911-8b54-a1ccfa5a907d\" (UID: \"bb314574-7438-4911-8b54-a1ccfa5a907d\") " Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.557508 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk" (OuterVolumeSpecName: "kube-api-access-lk4rk") pod "bb314574-7438-4911-8b54-a1ccfa5a907d" (UID: "bb314574-7438-4911-8b54-a1ccfa5a907d"). InnerVolumeSpecName "kube-api-access-lk4rk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.585108 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb314574-7438-4911-8b54-a1ccfa5a907d" (UID: "bb314574-7438-4911-8b54-a1ccfa5a907d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.596949 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory" (OuterVolumeSpecName: "inventory") pod "bb314574-7438-4911-8b54-a1ccfa5a907d" (UID: "bb314574-7438-4911-8b54-a1ccfa5a907d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.666159 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.666197 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk4rk\" (UniqueName: \"kubernetes.io/projected/bb314574-7438-4911-8b54-a1ccfa5a907d-kube-api-access-lk4rk\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.666209 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb314574-7438-4911-8b54-a1ccfa5a907d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.914239 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" 
event={"ID":"bb314574-7438-4911-8b54-a1ccfa5a907d","Type":"ContainerDied","Data":"0416229b085ad172ccd35abad22dedaca4dd73a3e902d38b8316f5c4ed255baf"} Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.914528 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0416229b085ad172ccd35abad22dedaca4dd73a3e902d38b8316f5c4ed255baf" Jan 26 19:08:00 crc kubenswrapper[4737]: I0126 19:08:00.914283 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-74xdm" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.017368 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk"] Jan 26 19:08:01 crc kubenswrapper[4737]: E0126 19:08:01.018872 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb314574-7438-4911-8b54-a1ccfa5a907d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.018903 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb314574-7438-4911-8b54-a1ccfa5a907d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.019284 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb314574-7438-4911-8b54-a1ccfa5a907d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.020414 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.023521 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.023754 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.023871 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.024328 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.056621 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk"] Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.178989 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zq2c\" (UniqueName: \"kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.179040 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.179223 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.282700 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zq2c\" (UniqueName: \"kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.282744 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.282813 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.287709 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.290197 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.300227 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zq2c\" (UniqueName: \"kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lj8qk\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.344672 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.908466 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk"] Jan 26 19:08:01 crc kubenswrapper[4737]: I0126 19:08:01.957363 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" event={"ID":"8f08d498-ef07-4e31-ab34-d68972740f02","Type":"ContainerStarted","Data":"0fdd12ecb3127acf98297b3091007db1e9a8cb03e2d25597b27a1f2019815d4c"} Jan 26 19:08:02 crc kubenswrapper[4737]: I0126 19:08:02.980400 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" event={"ID":"8f08d498-ef07-4e31-ab34-d68972740f02","Type":"ContainerStarted","Data":"b5bc454b3134567f89c712fd06eab65c5149189ef13ec2fdc661be2087fe8dad"} Jan 26 19:08:03 crc kubenswrapper[4737]: I0126 19:08:03.044662 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" podStartSLOduration=2.506116472 podStartE2EDuration="3.044613032s" podCreationTimestamp="2026-01-26 19:08:00 +0000 UTC" firstStartedPulling="2026-01-26 19:08:01.929522551 +0000 UTC m=+2255.237717259" lastFinishedPulling="2026-01-26 19:08:02.468019111 +0000 UTC m=+2255.776213819" observedRunningTime="2026-01-26 19:08:03.016186667 +0000 UTC m=+2256.324381375" watchObservedRunningTime="2026-01-26 19:08:03.044613032 +0000 UTC m=+2256.352807740" Jan 26 19:08:19 crc kubenswrapper[4737]: I0126 19:08:19.058255 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzv46"] Jan 26 19:08:19 crc kubenswrapper[4737]: I0126 19:08:19.071845 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xzv46"] Jan 26 19:08:21 crc kubenswrapper[4737]: I0126 
19:08:20.998187 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c3196f-2796-452a-ab7f-59145e00d722" path="/var/lib/kubelet/pods/d2c3196f-2796-452a-ab7f-59145e00d722/volumes" Jan 26 19:08:35 crc kubenswrapper[4737]: I0126 19:08:35.303398 4737 scope.go:117] "RemoveContainer" containerID="b87cd7a0a35b679ccd76d7661f35f934ceb713391288b1d35c9a4830710d2f82" Jan 26 19:08:35 crc kubenswrapper[4737]: I0126 19:08:35.336785 4737 scope.go:117] "RemoveContainer" containerID="9c5c86e220b689720e2541702ca731231d3515f7071e96ed7256880fbe86cb2e" Jan 26 19:08:35 crc kubenswrapper[4737]: I0126 19:08:35.391134 4737 scope.go:117] "RemoveContainer" containerID="3ac8d17f683e9b94a8213e038309cebb9dd9baa77a53a58ffa3f54c75f7a7901" Jan 26 19:08:35 crc kubenswrapper[4737]: I0126 19:08:35.458456 4737 scope.go:117] "RemoveContainer" containerID="6e12c8d35ef900f0488ae3a40792a50def952d0040c672c6ceb17da7f17f4422" Jan 26 19:08:35 crc kubenswrapper[4737]: I0126 19:08:35.524978 4737 scope.go:117] "RemoveContainer" containerID="b8f1aa0848e0a3f4d0a592fd5228b2391f3981971cb36c36e7aec34ce8cd5abb" Jan 26 19:08:40 crc kubenswrapper[4737]: I0126 19:08:40.401413 4737 generic.go:334] "Generic (PLEG): container finished" podID="8f08d498-ef07-4e31-ab34-d68972740f02" containerID="b5bc454b3134567f89c712fd06eab65c5149189ef13ec2fdc661be2087fe8dad" exitCode=0 Jan 26 19:08:40 crc kubenswrapper[4737]: I0126 19:08:40.401502 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" event={"ID":"8f08d498-ef07-4e31-ab34-d68972740f02","Type":"ContainerDied","Data":"b5bc454b3134567f89c712fd06eab65c5149189ef13ec2fdc661be2087fe8dad"} Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:41.896851 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.077220 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam\") pod \"8f08d498-ef07-4e31-ab34-d68972740f02\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.077352 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory\") pod \"8f08d498-ef07-4e31-ab34-d68972740f02\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.077549 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zq2c\" (UniqueName: \"kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c\") pod \"8f08d498-ef07-4e31-ab34-d68972740f02\" (UID: \"8f08d498-ef07-4e31-ab34-d68972740f02\") " Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.084603 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c" (OuterVolumeSpecName: "kube-api-access-4zq2c") pod "8f08d498-ef07-4e31-ab34-d68972740f02" (UID: "8f08d498-ef07-4e31-ab34-d68972740f02"). InnerVolumeSpecName "kube-api-access-4zq2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.126874 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f08d498-ef07-4e31-ab34-d68972740f02" (UID: "8f08d498-ef07-4e31-ab34-d68972740f02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.134377 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory" (OuterVolumeSpecName: "inventory") pod "8f08d498-ef07-4e31-ab34-d68972740f02" (UID: "8f08d498-ef07-4e31-ab34-d68972740f02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.181050 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zq2c\" (UniqueName: \"kubernetes.io/projected/8f08d498-ef07-4e31-ab34-d68972740f02-kube-api-access-4zq2c\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.181125 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.181146 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f08d498-ef07-4e31-ab34-d68972740f02-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.425396 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" 
event={"ID":"8f08d498-ef07-4e31-ab34-d68972740f02","Type":"ContainerDied","Data":"0fdd12ecb3127acf98297b3091007db1e9a8cb03e2d25597b27a1f2019815d4c"} Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.425438 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdd12ecb3127acf98297b3091007db1e9a8cb03e2d25597b27a1f2019815d4c" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.425460 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lj8qk" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.520869 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj"] Jan 26 19:08:42 crc kubenswrapper[4737]: E0126 19:08:42.521413 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f08d498-ef07-4e31-ab34-d68972740f02" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.521433 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f08d498-ef07-4e31-ab34-d68972740f02" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.521637 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f08d498-ef07-4e31-ab34-d68972740f02" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.522586 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.525892 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.525938 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.526087 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.526255 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.531775 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj"] Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.692469 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.692542 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqnz\" (UniqueName: \"kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.692988 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.795796 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.795851 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqnz\" (UniqueName: \"kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.796006 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.800350 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.801991 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.813026 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqnz\" (UniqueName: \"kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-trclj\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:42 crc kubenswrapper[4737]: I0126 19:08:42.839949 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:08:43 crc kubenswrapper[4737]: I0126 19:08:43.507959 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj"] Jan 26 19:08:44 crc kubenswrapper[4737]: I0126 19:08:44.446627 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" event={"ID":"e5cc8a39-bca0-4175-a418-a24c75e5bc06","Type":"ContainerStarted","Data":"0dc49cb144d3c23e3822459f43c5fa0d6d73d37914398e264e1445e399ac67b7"} Jan 26 19:08:44 crc kubenswrapper[4737]: I0126 19:08:44.447143 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" event={"ID":"e5cc8a39-bca0-4175-a418-a24c75e5bc06","Type":"ContainerStarted","Data":"1220b6e1985432d5828b65c6a6cfa77f8be29b90b9682552284d88d6bf8002f2"} Jan 26 19:08:44 crc kubenswrapper[4737]: I0126 19:08:44.472854 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" podStartSLOduration=1.837274662 podStartE2EDuration="2.472834831s" podCreationTimestamp="2026-01-26 19:08:42 +0000 UTC" firstStartedPulling="2026-01-26 19:08:43.515320615 +0000 UTC m=+2296.823515323" lastFinishedPulling="2026-01-26 19:08:44.150880784 +0000 UTC m=+2297.459075492" observedRunningTime="2026-01-26 19:08:44.459927325 +0000 UTC m=+2297.768122033" watchObservedRunningTime="2026-01-26 19:08:44.472834831 +0000 UTC m=+2297.781029529" Jan 26 19:09:30 crc kubenswrapper[4737]: I0126 19:09:30.948983 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:09:30 crc 
kubenswrapper[4737]: I0126 19:09:30.949518 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:09:38 crc kubenswrapper[4737]: I0126 19:09:38.063231 4737 generic.go:334] "Generic (PLEG): container finished" podID="e5cc8a39-bca0-4175-a418-a24c75e5bc06" containerID="0dc49cb144d3c23e3822459f43c5fa0d6d73d37914398e264e1445e399ac67b7" exitCode=0 Jan 26 19:09:38 crc kubenswrapper[4737]: I0126 19:09:38.064112 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" event={"ID":"e5cc8a39-bca0-4175-a418-a24c75e5bc06","Type":"ContainerDied","Data":"0dc49cb144d3c23e3822459f43c5fa0d6d73d37914398e264e1445e399ac67b7"} Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.646412 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.699185 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam\") pod \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.699422 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mqnz\" (UniqueName: \"kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz\") pod \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.699559 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory\") pod \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\" (UID: \"e5cc8a39-bca0-4175-a418-a24c75e5bc06\") " Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.708557 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz" (OuterVolumeSpecName: "kube-api-access-4mqnz") pod "e5cc8a39-bca0-4175-a418-a24c75e5bc06" (UID: "e5cc8a39-bca0-4175-a418-a24c75e5bc06"). InnerVolumeSpecName "kube-api-access-4mqnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.743457 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory" (OuterVolumeSpecName: "inventory") pod "e5cc8a39-bca0-4175-a418-a24c75e5bc06" (UID: "e5cc8a39-bca0-4175-a418-a24c75e5bc06"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.748687 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5cc8a39-bca0-4175-a418-a24c75e5bc06" (UID: "e5cc8a39-bca0-4175-a418-a24c75e5bc06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.802534 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mqnz\" (UniqueName: \"kubernetes.io/projected/e5cc8a39-bca0-4175-a418-a24c75e5bc06-kube-api-access-4mqnz\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.802585 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:39 crc kubenswrapper[4737]: I0126 19:09:39.802596 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5cc8a39-bca0-4175-a418-a24c75e5bc06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.087008 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" event={"ID":"e5cc8a39-bca0-4175-a418-a24c75e5bc06","Type":"ContainerDied","Data":"1220b6e1985432d5828b65c6a6cfa77f8be29b90b9682552284d88d6bf8002f2"} Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.087189 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1220b6e1985432d5828b65c6a6cfa77f8be29b90b9682552284d88d6bf8002f2" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 
19:09:40.087210 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-trclj" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.206485 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2hhm"] Jan 26 19:09:40 crc kubenswrapper[4737]: E0126 19:09:40.207098 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc8a39-bca0-4175-a418-a24c75e5bc06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.207120 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc8a39-bca0-4175-a418-a24c75e5bc06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.207441 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5cc8a39-bca0-4175-a418-a24c75e5bc06" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.208394 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.212365 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.212570 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.212699 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.214371 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.243350 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2hhm"] Jan 26 19:09:40 crc kubenswrapper[4737]: E0126 19:09:40.248001 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5cc8a39_bca0_4175_a418_a24c75e5bc06.slice/crio-1220b6e1985432d5828b65c6a6cfa77f8be29b90b9682552284d88d6bf8002f2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5cc8a39_bca0_4175_a418_a24c75e5bc06.slice\": RecentStats: unable to find data in memory cache]" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.334543 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 
26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.334845 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkjt\" (UniqueName: \"kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.334976 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.437019 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.437189 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.437219 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkjt\" (UniqueName: \"kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" 
(UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.442800 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.449014 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.477553 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkjt\" (UniqueName: \"kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt\") pod \"ssh-known-hosts-edpm-deployment-h2hhm\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:40 crc kubenswrapper[4737]: I0126 19:09:40.533193 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:41 crc kubenswrapper[4737]: I0126 19:09:41.118088 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2hhm"] Jan 26 19:09:41 crc kubenswrapper[4737]: I0126 19:09:41.123242 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:09:42 crc kubenswrapper[4737]: I0126 19:09:42.108029 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" event={"ID":"395dd2b5-3055-45e9-b528-9bc97b61743f","Type":"ContainerStarted","Data":"2fe12cf2daef8a2b09c87a017de8c728c5790eb82cef89a754b5499d4835fc78"} Jan 26 19:09:42 crc kubenswrapper[4737]: I0126 19:09:42.108790 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" event={"ID":"395dd2b5-3055-45e9-b528-9bc97b61743f","Type":"ContainerStarted","Data":"51879b17de5cc04d6d9bc80215a446a9223e645af4ada20b40ea5c36657bdf0d"} Jan 26 19:09:42 crc kubenswrapper[4737]: I0126 19:09:42.130701 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" podStartSLOduration=1.652625546 podStartE2EDuration="2.130673947s" podCreationTimestamp="2026-01-26 19:09:40 +0000 UTC" firstStartedPulling="2026-01-26 19:09:41.12294011 +0000 UTC m=+2354.431134818" lastFinishedPulling="2026-01-26 19:09:41.600988501 +0000 UTC m=+2354.909183219" observedRunningTime="2026-01-26 19:09:42.119798591 +0000 UTC m=+2355.427993299" watchObservedRunningTime="2026-01-26 19:09:42.130673947 +0000 UTC m=+2355.438868685" Jan 26 19:09:44 crc kubenswrapper[4737]: I0126 19:09:44.050201 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-k2vkj"] Jan 26 19:09:44 crc kubenswrapper[4737]: I0126 19:09:44.065046 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-db-sync-k2vkj"] Jan 26 19:09:44 crc kubenswrapper[4737]: I0126 19:09:44.996827 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3a0926-ce79-4117-b8e6-96fcf0a492fc" path="/var/lib/kubelet/pods/7f3a0926-ce79-4117-b8e6-96fcf0a492fc/volumes" Jan 26 19:09:48 crc kubenswrapper[4737]: E0126 19:09:48.248048 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395dd2b5_3055_45e9_b528_9bc97b61743f.slice/crio-2fe12cf2daef8a2b09c87a017de8c728c5790eb82cef89a754b5499d4835fc78.scope\": RecentStats: unable to find data in memory cache]" Jan 26 19:09:48 crc kubenswrapper[4737]: E0126 19:09:48.248391 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395dd2b5_3055_45e9_b528_9bc97b61743f.slice/crio-conmon-2fe12cf2daef8a2b09c87a017de8c728c5790eb82cef89a754b5499d4835fc78.scope\": RecentStats: unable to find data in memory cache]" Jan 26 19:09:49 crc kubenswrapper[4737]: I0126 19:09:49.177393 4737 generic.go:334] "Generic (PLEG): container finished" podID="395dd2b5-3055-45e9-b528-9bc97b61743f" containerID="2fe12cf2daef8a2b09c87a017de8c728c5790eb82cef89a754b5499d4835fc78" exitCode=0 Jan 26 19:09:49 crc kubenswrapper[4737]: I0126 19:09:49.177488 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" event={"ID":"395dd2b5-3055-45e9-b528-9bc97b61743f","Type":"ContainerDied","Data":"2fe12cf2daef8a2b09c87a017de8c728c5790eb82cef89a754b5499d4835fc78"} Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.667234 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.790493 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0\") pod \"395dd2b5-3055-45e9-b528-9bc97b61743f\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.790607 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bkjt\" (UniqueName: \"kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt\") pod \"395dd2b5-3055-45e9-b528-9bc97b61743f\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.790784 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam\") pod \"395dd2b5-3055-45e9-b528-9bc97b61743f\" (UID: \"395dd2b5-3055-45e9-b528-9bc97b61743f\") " Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.803277 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt" (OuterVolumeSpecName: "kube-api-access-2bkjt") pod "395dd2b5-3055-45e9-b528-9bc97b61743f" (UID: "395dd2b5-3055-45e9-b528-9bc97b61743f"). InnerVolumeSpecName "kube-api-access-2bkjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.840414 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "395dd2b5-3055-45e9-b528-9bc97b61743f" (UID: "395dd2b5-3055-45e9-b528-9bc97b61743f"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.842951 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "395dd2b5-3055-45e9-b528-9bc97b61743f" (UID: "395dd2b5-3055-45e9-b528-9bc97b61743f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.894330 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.894366 4737 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/395dd2b5-3055-45e9-b528-9bc97b61743f-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:50 crc kubenswrapper[4737]: I0126 19:09:50.894376 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bkjt\" (UniqueName: \"kubernetes.io/projected/395dd2b5-3055-45e9-b528-9bc97b61743f-kube-api-access-2bkjt\") on node \"crc\" DevicePath \"\"" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.198678 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" event={"ID":"395dd2b5-3055-45e9-b528-9bc97b61743f","Type":"ContainerDied","Data":"51879b17de5cc04d6d9bc80215a446a9223e645af4ada20b40ea5c36657bdf0d"} Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.199036 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51879b17de5cc04d6d9bc80215a446a9223e645af4ada20b40ea5c36657bdf0d" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.198815 
4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2hhm" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.263534 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6"] Jan 26 19:09:51 crc kubenswrapper[4737]: E0126 19:09:51.264052 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395dd2b5-3055-45e9-b528-9bc97b61743f" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.264086 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="395dd2b5-3055-45e9-b528-9bc97b61743f" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.264338 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="395dd2b5-3055-45e9-b528-9bc97b61743f" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.265209 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.269385 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.269696 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.269868 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.274063 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.280888 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6"] Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.406577 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.406703 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.407240 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srll7\" (UniqueName: \"kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.510183 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.510404 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srll7\" (UniqueName: \"kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.510549 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.519248 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.519676 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.539205 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srll7\" (UniqueName: \"kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4krm6\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:51 crc kubenswrapper[4737]: I0126 19:09:51.581558 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:09:52 crc kubenswrapper[4737]: I0126 19:09:52.121726 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6"] Jan 26 19:09:52 crc kubenswrapper[4737]: I0126 19:09:52.215189 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" event={"ID":"2440805a-4477-42f6-bc13-01fc157e1b94","Type":"ContainerStarted","Data":"17441b0e19ce9b1a2e1c6e727ab4736b046bfd42024eb7f4b3fbc81f5984d5d7"} Jan 26 19:09:53 crc kubenswrapper[4737]: I0126 19:09:53.235688 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" event={"ID":"2440805a-4477-42f6-bc13-01fc157e1b94","Type":"ContainerStarted","Data":"5bb08c5694507b1a801a2f0127599ce01d2c07985a20bfd38a58a964bdef290c"} Jan 26 19:09:53 crc kubenswrapper[4737]: I0126 19:09:53.262779 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" podStartSLOduration=1.818616609 podStartE2EDuration="2.262757349s" podCreationTimestamp="2026-01-26 19:09:51 +0000 UTC" firstStartedPulling="2026-01-26 19:09:52.125376776 +0000 UTC m=+2365.433571484" lastFinishedPulling="2026-01-26 19:09:52.569517516 +0000 UTC m=+2365.877712224" observedRunningTime="2026-01-26 19:09:53.259836347 +0000 UTC m=+2366.568031055" watchObservedRunningTime="2026-01-26 19:09:53.262757349 +0000 UTC m=+2366.570952057" Jan 26 19:10:00 crc kubenswrapper[4737]: I0126 19:10:00.305496 4737 generic.go:334] "Generic (PLEG): container finished" podID="2440805a-4477-42f6-bc13-01fc157e1b94" containerID="5bb08c5694507b1a801a2f0127599ce01d2c07985a20bfd38a58a964bdef290c" exitCode=0 Jan 26 19:10:00 crc kubenswrapper[4737]: I0126 19:10:00.305591 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" event={"ID":"2440805a-4477-42f6-bc13-01fc157e1b94","Type":"ContainerDied","Data":"5bb08c5694507b1a801a2f0127599ce01d2c07985a20bfd38a58a964bdef290c"} Jan 26 19:10:00 crc kubenswrapper[4737]: I0126 19:10:00.949478 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:10:00 crc kubenswrapper[4737]: I0126 19:10:00.949549 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.785641 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.860726 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam\") pod \"2440805a-4477-42f6-bc13-01fc157e1b94\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.861017 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory\") pod \"2440805a-4477-42f6-bc13-01fc157e1b94\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.861428 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srll7\" (UniqueName: \"kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7\") pod \"2440805a-4477-42f6-bc13-01fc157e1b94\" (UID: \"2440805a-4477-42f6-bc13-01fc157e1b94\") " Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.869555 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7" (OuterVolumeSpecName: "kube-api-access-srll7") pod "2440805a-4477-42f6-bc13-01fc157e1b94" (UID: "2440805a-4477-42f6-bc13-01fc157e1b94"). InnerVolumeSpecName "kube-api-access-srll7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.899040 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2440805a-4477-42f6-bc13-01fc157e1b94" (UID: "2440805a-4477-42f6-bc13-01fc157e1b94"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.899710 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory" (OuterVolumeSpecName: "inventory") pod "2440805a-4477-42f6-bc13-01fc157e1b94" (UID: "2440805a-4477-42f6-bc13-01fc157e1b94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.967166 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srll7\" (UniqueName: \"kubernetes.io/projected/2440805a-4477-42f6-bc13-01fc157e1b94-kube-api-access-srll7\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.967234 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:01 crc kubenswrapper[4737]: I0126 19:10:01.967250 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2440805a-4477-42f6-bc13-01fc157e1b94-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.325383 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" 
event={"ID":"2440805a-4477-42f6-bc13-01fc157e1b94","Type":"ContainerDied","Data":"17441b0e19ce9b1a2e1c6e727ab4736b046bfd42024eb7f4b3fbc81f5984d5d7"} Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.325425 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17441b0e19ce9b1a2e1c6e727ab4736b046bfd42024eb7f4b3fbc81f5984d5d7" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.325448 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4krm6" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.399563 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt"] Jan 26 19:10:02 crc kubenswrapper[4737]: E0126 19:10:02.400264 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2440805a-4477-42f6-bc13-01fc157e1b94" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.400294 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2440805a-4477-42f6-bc13-01fc157e1b94" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.400632 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2440805a-4477-42f6-bc13-01fc157e1b94" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.419181 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.422706 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.426925 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.427182 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.429139 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.442089 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt"] Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.478653 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.478868 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdkw\" (UniqueName: \"kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.479310 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.581712 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdkw\" (UniqueName: \"kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.582176 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.582286 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.587532 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.591775 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.599262 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdkw\" (UniqueName: \"kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:02 crc kubenswrapper[4737]: I0126 19:10:02.745494 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:03 crc kubenswrapper[4737]: I0126 19:10:03.314101 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt"] Jan 26 19:10:03 crc kubenswrapper[4737]: I0126 19:10:03.335887 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" event={"ID":"34f77dce-aaea-4249-be45-fa7c47b5616b","Type":"ContainerStarted","Data":"574a1b77b191bf11cef6e142726cc6084fe23652de253be2822f8b172bc98744"} Jan 26 19:10:04 crc kubenswrapper[4737]: I0126 19:10:04.348324 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" event={"ID":"34f77dce-aaea-4249-be45-fa7c47b5616b","Type":"ContainerStarted","Data":"2286ca108f7cabafb7c484bc6edf0b10a2c370ad1899092a773b8cd0d0ed207e"} Jan 26 19:10:04 crc kubenswrapper[4737]: I0126 19:10:04.376582 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" podStartSLOduration=1.97551391 podStartE2EDuration="2.376560685s" podCreationTimestamp="2026-01-26 19:10:02 +0000 UTC" firstStartedPulling="2026-01-26 19:10:03.314805713 +0000 UTC m=+2376.623000421" lastFinishedPulling="2026-01-26 19:10:03.715852488 +0000 UTC m=+2377.024047196" observedRunningTime="2026-01-26 19:10:04.363981526 +0000 UTC m=+2377.672176254" watchObservedRunningTime="2026-01-26 19:10:04.376560685 +0000 UTC m=+2377.684755393" Jan 26 19:10:13 crc kubenswrapper[4737]: I0126 19:10:13.442596 4737 generic.go:334] "Generic (PLEG): container finished" podID="34f77dce-aaea-4249-be45-fa7c47b5616b" containerID="2286ca108f7cabafb7c484bc6edf0b10a2c370ad1899092a773b8cd0d0ed207e" exitCode=0 Jan 26 19:10:13 crc kubenswrapper[4737]: I0126 19:10:13.442676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" event={"ID":"34f77dce-aaea-4249-be45-fa7c47b5616b","Type":"ContainerDied","Data":"2286ca108f7cabafb7c484bc6edf0b10a2c370ad1899092a773b8cd0d0ed207e"} Jan 26 19:10:14 crc kubenswrapper[4737]: I0126 19:10:14.893122 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.008402 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam\") pod \"34f77dce-aaea-4249-be45-fa7c47b5616b\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.008615 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory\") pod \"34f77dce-aaea-4249-be45-fa7c47b5616b\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.008652 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crdkw\" (UniqueName: \"kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw\") pod \"34f77dce-aaea-4249-be45-fa7c47b5616b\" (UID: \"34f77dce-aaea-4249-be45-fa7c47b5616b\") " Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.016508 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw" (OuterVolumeSpecName: "kube-api-access-crdkw") pod "34f77dce-aaea-4249-be45-fa7c47b5616b" (UID: "34f77dce-aaea-4249-be45-fa7c47b5616b"). InnerVolumeSpecName "kube-api-access-crdkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.040412 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory" (OuterVolumeSpecName: "inventory") pod "34f77dce-aaea-4249-be45-fa7c47b5616b" (UID: "34f77dce-aaea-4249-be45-fa7c47b5616b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.040435 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34f77dce-aaea-4249-be45-fa7c47b5616b" (UID: "34f77dce-aaea-4249-be45-fa7c47b5616b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.111828 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.111861 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34f77dce-aaea-4249-be45-fa7c47b5616b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.111870 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crdkw\" (UniqueName: \"kubernetes.io/projected/34f77dce-aaea-4249-be45-fa7c47b5616b-kube-api-access-crdkw\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.466393 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" 
event={"ID":"34f77dce-aaea-4249-be45-fa7c47b5616b","Type":"ContainerDied","Data":"574a1b77b191bf11cef6e142726cc6084fe23652de253be2822f8b172bc98744"} Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.466940 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574a1b77b191bf11cef6e142726cc6084fe23652de253be2822f8b172bc98744" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.467022 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.558977 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54"] Jan 26 19:10:15 crc kubenswrapper[4737]: E0126 19:10:15.559644 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f77dce-aaea-4249-be45-fa7c47b5616b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.559670 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f77dce-aaea-4249-be45-fa7c47b5616b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.559944 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f77dce-aaea-4249-be45-fa7c47b5616b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.560998 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.563541 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.563745 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.563750 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.563996 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.564276 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.564297 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.565234 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.565630 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.568372 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.581558 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54"] Jan 26 19:10:15 crc 
kubenswrapper[4737]: I0126 19:10:15.623861 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.623904 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.623932 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.623981 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzh7\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624118 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624177 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624207 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624229 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc 
kubenswrapper[4737]: I0126 19:10:15.624269 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624321 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624349 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624368 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 
19:10:15.624397 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624429 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624449 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.624492 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 
crc kubenswrapper[4737]: I0126 19:10:15.726466 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkzh7\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727098 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727198 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727233 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 
19:10:15.727263 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727323 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727386 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727429 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727456 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727501 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727554 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727584 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727655 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727781 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727819 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.727853 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.735692 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.736483 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.736607 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.737613 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.738840 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.739731 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.739795 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.740969 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.743194 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.744111 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.744606 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.744943 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.748232 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.748410 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.758572 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.763289 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkzh7\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzz54\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:15 crc kubenswrapper[4737]: I0126 19:10:15.882791 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:10:16 crc kubenswrapper[4737]: I0126 19:10:16.447304 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54"] Jan 26 19:10:16 crc kubenswrapper[4737]: I0126 19:10:16.478458 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" event={"ID":"fa425b93-9221-4f0b-b0fd-7995e092f8f1","Type":"ContainerStarted","Data":"ab26aef8559b34c649572f84d6abfbb2c6312a78b7976d9fc62009b81db045b4"} Jan 26 19:10:17 crc kubenswrapper[4737]: I0126 19:10:17.491053 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" event={"ID":"fa425b93-9221-4f0b-b0fd-7995e092f8f1","Type":"ContainerStarted","Data":"221481e20936d31bbe5bf1bd6dbd3d0d197281ecfcaaaf393ff0d3f9dce30cad"} Jan 26 19:10:17 crc kubenswrapper[4737]: I0126 19:10:17.530883 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" podStartSLOduration=2.143558139 podStartE2EDuration="2.530848827s" podCreationTimestamp="2026-01-26 19:10:15 +0000 UTC" firstStartedPulling="2026-01-26 19:10:16.447429116 +0000 UTC m=+2389.755623824" lastFinishedPulling="2026-01-26 19:10:16.834719804 +0000 UTC m=+2390.142914512" observedRunningTime="2026-01-26 19:10:17.520627896 +0000 UTC m=+2390.828822614" watchObservedRunningTime="2026-01-26 19:10:17.530848827 +0000 UTC m=+2390.839043535" Jan 26 19:10:24 crc kubenswrapper[4737]: I0126 19:10:24.042974 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pv5s6"] Jan 26 19:10:24 crc kubenswrapper[4737]: I0126 19:10:24.054228 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pv5s6"] Jan 26 19:10:24 crc kubenswrapper[4737]: I0126 19:10:24.994857 
4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc" path="/var/lib/kubelet/pods/6b0d6ef5-d1e3-4a80-83c1-04f01fc707dc/volumes" Jan 26 19:10:30 crc kubenswrapper[4737]: I0126 19:10:30.949036 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:10:30 crc kubenswrapper[4737]: I0126 19:10:30.949606 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:10:30 crc kubenswrapper[4737]: I0126 19:10:30.949650 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:10:30 crc kubenswrapper[4737]: I0126 19:10:30.950779 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:10:30 crc kubenswrapper[4737]: I0126 19:10:30.950836 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" gracePeriod=600 Jan 26 19:10:31 crc 
kubenswrapper[4737]: E0126 19:10:31.085635 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:10:31 crc kubenswrapper[4737]: I0126 19:10:31.653969 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" exitCode=0 Jan 26 19:10:31 crc kubenswrapper[4737]: I0126 19:10:31.654017 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f"} Jan 26 19:10:31 crc kubenswrapper[4737]: I0126 19:10:31.654060 4737 scope.go:117] "RemoveContainer" containerID="128858a05e84587d74f8a27fb380177b3d24231b3df428cd4848c4a2148ba1b3" Jan 26 19:10:31 crc kubenswrapper[4737]: I0126 19:10:31.654881 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:10:31 crc kubenswrapper[4737]: E0126 19:10:31.655204 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:10:35 crc kubenswrapper[4737]: I0126 19:10:35.691690 4737 scope.go:117] 
"RemoveContainer" containerID="0f3e5a988859ee2f6011c7e863d600a3f9dc924ea7d67718e116dfa56ddf5a40" Jan 26 19:10:35 crc kubenswrapper[4737]: I0126 19:10:35.717518 4737 scope.go:117] "RemoveContainer" containerID="63e9ba0775d01058dfbad686887b37bfe07af5dfdd2248eb938214feb97f3122" Jan 26 19:10:44 crc kubenswrapper[4737]: I0126 19:10:44.982313 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:10:44 crc kubenswrapper[4737]: E0126 19:10:44.983152 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:10:59 crc kubenswrapper[4737]: I0126 19:10:59.982231 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:10:59 crc kubenswrapper[4737]: E0126 19:10:59.984514 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:11:01 crc kubenswrapper[4737]: I0126 19:11:01.974595 4737 generic.go:334] "Generic (PLEG): container finished" podID="fa425b93-9221-4f0b-b0fd-7995e092f8f1" containerID="221481e20936d31bbe5bf1bd6dbd3d0d197281ecfcaaaf393ff0d3f9dce30cad" exitCode=0 Jan 26 19:11:01 crc kubenswrapper[4737]: I0126 19:11:01.974658 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" event={"ID":"fa425b93-9221-4f0b-b0fd-7995e092f8f1","Type":"ContainerDied","Data":"221481e20936d31bbe5bf1bd6dbd3d0d197281ecfcaaaf393ff0d3f9dce30cad"} Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.535663 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589236 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589284 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589339 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589400 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589442 4737 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589479 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589512 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589529 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589662 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 
crc kubenswrapper[4737]: I0126 19:11:03.589682 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkzh7\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589788 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589825 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589843 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589893 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.589995 4737 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.590049 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle\") pod \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\" (UID: \"fa425b93-9221-4f0b-b0fd-7995e092f8f1\") " Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.599659 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.605573 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.605610 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.605798 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.605849 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.606180 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7" (OuterVolumeSpecName: "kube-api-access-zkzh7") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "kube-api-access-zkzh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.606353 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.606399 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.608326 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.609050 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.609922 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.609985 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.610063 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.613419 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.631306 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory" (OuterVolumeSpecName: "inventory") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.642958 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa425b93-9221-4f0b-b0fd-7995e092f8f1" (UID: "fa425b93-9221-4f0b-b0fd-7995e092f8f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692434 4737 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692467 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692480 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692493 4737 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692508 4737 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692520 4737 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692533 4737 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692549 4737 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692587 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692605 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkzh7\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-kube-api-access-zkzh7\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692618 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692698 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692715 4737 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692802 4737 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692825 4737 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa425b93-9221-4f0b-b0fd-7995e092f8f1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:03 crc kubenswrapper[4737]: I0126 19:11:03.692840 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa425b93-9221-4f0b-b0fd-7995e092f8f1-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.004738 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" event={"ID":"fa425b93-9221-4f0b-b0fd-7995e092f8f1","Type":"ContainerDied","Data":"ab26aef8559b34c649572f84d6abfbb2c6312a78b7976d9fc62009b81db045b4"} Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.004843 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab26aef8559b34c649572f84d6abfbb2c6312a78b7976d9fc62009b81db045b4" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.004800 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzz54" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.127174 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8"] Jan 26 19:11:04 crc kubenswrapper[4737]: E0126 19:11:04.127706 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa425b93-9221-4f0b-b0fd-7995e092f8f1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.127730 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa425b93-9221-4f0b-b0fd-7995e092f8f1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.128053 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa425b93-9221-4f0b-b0fd-7995e092f8f1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.129187 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.133019 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.133171 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.133221 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.133460 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.133611 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.158276 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8"] Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.207383 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.207676 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.207834 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.207991 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9d7\" (UniqueName: \"kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.208331 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.310707 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9d7\" (UniqueName: \"kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.311133 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.311308 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.311394 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.311495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.313332 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.317035 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.317114 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.331328 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.337148 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp9d7\" (UniqueName: \"kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9r2p8\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:04 crc kubenswrapper[4737]: I0126 19:11:04.469444 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:11:05 crc kubenswrapper[4737]: I0126 19:11:05.017642 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8"] Jan 26 19:11:06 crc kubenswrapper[4737]: I0126 19:11:06.032097 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" event={"ID":"7602eee6-3627-420f-8e44-c19689be75de","Type":"ContainerStarted","Data":"c75ebf6905f138c6cd09107266a654f1275933ae38b4f5688a4546b33d212c54"} Jan 26 19:11:06 crc kubenswrapper[4737]: I0126 19:11:06.032937 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" event={"ID":"7602eee6-3627-420f-8e44-c19689be75de","Type":"ContainerStarted","Data":"d2808b5b34f8830de52775cbb2911d56a7b7c94bf90884df62bee3634c3d783f"} Jan 26 19:11:06 crc kubenswrapper[4737]: I0126 19:11:06.068892 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" podStartSLOduration=1.54858588 podStartE2EDuration="2.068870276s" podCreationTimestamp="2026-01-26 19:11:04 +0000 UTC" firstStartedPulling="2026-01-26 19:11:05.008724375 +0000 UTC m=+2438.316919083" lastFinishedPulling="2026-01-26 19:11:05.529008771 +0000 UTC m=+2438.837203479" observedRunningTime="2026-01-26 19:11:06.050951117 +0000 UTC m=+2439.359145825" watchObservedRunningTime="2026-01-26 19:11:06.068870276 +0000 UTC m=+2439.377064984" Jan 26 19:11:13 crc kubenswrapper[4737]: I0126 19:11:13.982939 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:11:13 crc kubenswrapper[4737]: E0126 19:11:13.984449 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:11:26 crc kubenswrapper[4737]: I0126 19:11:26.992939 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:11:26 crc kubenswrapper[4737]: E0126 19:11:26.994026 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:11:40 crc kubenswrapper[4737]: I0126 19:11:40.982857 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:11:40 crc kubenswrapper[4737]: E0126 19:11:40.983733 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:11:52 crc kubenswrapper[4737]: I0126 19:11:52.983062 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:11:52 crc kubenswrapper[4737]: E0126 19:11:52.984462 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:12:07 crc kubenswrapper[4737]: I0126 19:12:07.982849 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:12:07 crc kubenswrapper[4737]: E0126 19:12:07.984285 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:12:13 crc kubenswrapper[4737]: I0126 19:12:13.883330 4737 generic.go:334] "Generic (PLEG): container finished" podID="7602eee6-3627-420f-8e44-c19689be75de" containerID="c75ebf6905f138c6cd09107266a654f1275933ae38b4f5688a4546b33d212c54" exitCode=0 Jan 26 19:12:13 crc kubenswrapper[4737]: I0126 19:12:13.883412 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" event={"ID":"7602eee6-3627-420f-8e44-c19689be75de","Type":"ContainerDied","Data":"c75ebf6905f138c6cd09107266a654f1275933ae38b4f5688a4546b33d212c54"} Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.425916 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.487060 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle\") pod \"7602eee6-3627-420f-8e44-c19689be75de\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.487126 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory\") pod \"7602eee6-3627-420f-8e44-c19689be75de\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.487147 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam\") pod \"7602eee6-3627-420f-8e44-c19689be75de\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.487192 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp9d7\" (UniqueName: \"kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7\") pod \"7602eee6-3627-420f-8e44-c19689be75de\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.487229 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0\") pod \"7602eee6-3627-420f-8e44-c19689be75de\" (UID: \"7602eee6-3627-420f-8e44-c19689be75de\") " Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.494300 4737 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7602eee6-3627-420f-8e44-c19689be75de" (UID: "7602eee6-3627-420f-8e44-c19689be75de"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.495412 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7" (OuterVolumeSpecName: "kube-api-access-mp9d7") pod "7602eee6-3627-420f-8e44-c19689be75de" (UID: "7602eee6-3627-420f-8e44-c19689be75de"). InnerVolumeSpecName "kube-api-access-mp9d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.519539 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7602eee6-3627-420f-8e44-c19689be75de" (UID: "7602eee6-3627-420f-8e44-c19689be75de"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.533449 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7602eee6-3627-420f-8e44-c19689be75de" (UID: "7602eee6-3627-420f-8e44-c19689be75de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.555386 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory" (OuterVolumeSpecName: "inventory") pod "7602eee6-3627-420f-8e44-c19689be75de" (UID: "7602eee6-3627-420f-8e44-c19689be75de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.590025 4737 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.590082 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.590093 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7602eee6-3627-420f-8e44-c19689be75de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.590109 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp9d7\" (UniqueName: \"kubernetes.io/projected/7602eee6-3627-420f-8e44-c19689be75de-kube-api-access-mp9d7\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.590117 4737 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7602eee6-3627-420f-8e44-c19689be75de-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.906807 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" event={"ID":"7602eee6-3627-420f-8e44-c19689be75de","Type":"ContainerDied","Data":"d2808b5b34f8830de52775cbb2911d56a7b7c94bf90884df62bee3634c3d783f"} Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.906852 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2808b5b34f8830de52775cbb2911d56a7b7c94bf90884df62bee3634c3d783f" Jan 26 19:12:15 crc kubenswrapper[4737]: I0126 19:12:15.906860 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9r2p8" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.024895 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp"] Jan 26 19:12:16 crc kubenswrapper[4737]: E0126 19:12:16.025500 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7602eee6-3627-420f-8e44-c19689be75de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.025521 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="7602eee6-3627-420f-8e44-c19689be75de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.025764 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="7602eee6-3627-420f-8e44-c19689be75de" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.026731 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.033939 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.034332 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.034529 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.034660 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.034702 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.034869 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.044813 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp"] Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.101734 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnskb\" (UniqueName: \"kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.101809 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.101832 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.102263 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.102443 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.102621 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.204803 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnskb\" (UniqueName: \"kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.204879 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.204906 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.204992 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.205040 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.205169 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.209521 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.209934 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.209987 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.210892 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.211915 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.226184 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnskb\" (UniqueName: \"kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:16 crc kubenswrapper[4737]: I0126 19:12:16.358646 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:12:17 crc kubenswrapper[4737]: I0126 19:12:17.037356 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp"] Jan 26 19:12:17 crc kubenswrapper[4737]: I0126 19:12:17.937776 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" event={"ID":"f03ef699-8fd7-4aad-a3a5-8a7306048d86","Type":"ContainerStarted","Data":"12abbb23ed42e267fff96531a71b11764c3afc111d07eb429a12161ddda4becb"} Jan 26 19:12:17 crc kubenswrapper[4737]: I0126 19:12:17.938145 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" event={"ID":"f03ef699-8fd7-4aad-a3a5-8a7306048d86","Type":"ContainerStarted","Data":"6a015a76b2437fd5c35529ffcb381d8bf08f0d696cb381accbcd35b7e03818f5"} Jan 26 19:12:17 crc kubenswrapper[4737]: I0126 19:12:17.960689 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" podStartSLOduration=2.534095577 podStartE2EDuration="2.960660457s" podCreationTimestamp="2026-01-26 19:12:15 +0000 UTC" firstStartedPulling="2026-01-26 19:12:17.073608816 +0000 UTC m=+2510.381803524" lastFinishedPulling="2026-01-26 19:12:17.500173706 +0000 UTC m=+2510.808368404" observedRunningTime="2026-01-26 19:12:17.957222963 +0000 UTC m=+2511.265417681" watchObservedRunningTime="2026-01-26 19:12:17.960660457 +0000 UTC m=+2511.268855165" Jan 26 19:12:19 crc kubenswrapper[4737]: I0126 19:12:19.982682 4737 scope.go:117] "RemoveContainer" 
containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:12:19 crc kubenswrapper[4737]: E0126 19:12:19.983501 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.620255 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.623394 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.676935 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.801559 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstfq\" (UniqueName: \"kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.801694 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 
19:12:21.801886 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.905933 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wstfq\" (UniqueName: \"kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.906150 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.906476 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.907291 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.909207 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.959458 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstfq\" (UniqueName: \"kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq\") pod \"redhat-marketplace-r747k\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:21 crc kubenswrapper[4737]: I0126 19:12:21.978065 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:22 crc kubenswrapper[4737]: I0126 19:12:22.586719 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:23 crc kubenswrapper[4737]: I0126 19:12:22.999737 4737 generic.go:334] "Generic (PLEG): container finished" podID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerID="b4dbbd304a451d03a59943fccc906d0ae7ad3af6175b6c40ac2e78a5394eecec" exitCode=0 Jan 26 19:12:23 crc kubenswrapper[4737]: I0126 19:12:22.999835 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerDied","Data":"b4dbbd304a451d03a59943fccc906d0ae7ad3af6175b6c40ac2e78a5394eecec"} Jan 26 19:12:23 crc kubenswrapper[4737]: I0126 19:12:23.000026 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerStarted","Data":"a9e767297ab134c0a91f1bb3eb3f9171bc6c01b12c0231e3b76b7bc10dae8ea1"} Jan 26 19:12:26 
crc kubenswrapper[4737]: I0126 19:12:26.040564 4737 generic.go:334] "Generic (PLEG): container finished" podID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerID="707cd2482abfab8f581f655f2f7d440a545e5743c8bedbe3b0136bf6786c8bc8" exitCode=0 Jan 26 19:12:26 crc kubenswrapper[4737]: I0126 19:12:26.040639 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerDied","Data":"707cd2482abfab8f581f655f2f7d440a545e5743c8bedbe3b0136bf6786c8bc8"} Jan 26 19:12:27 crc kubenswrapper[4737]: I0126 19:12:27.055496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerStarted","Data":"4dba1f506c630e98cb6c893dee4ad7168b88027226fccbb3543c9c112d2a5282"} Jan 26 19:12:27 crc kubenswrapper[4737]: I0126 19:12:27.085189 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r747k" podStartSLOduration=2.503280909 podStartE2EDuration="6.085159368s" podCreationTimestamp="2026-01-26 19:12:21 +0000 UTC" firstStartedPulling="2026-01-26 19:12:23.002713737 +0000 UTC m=+2516.310908445" lastFinishedPulling="2026-01-26 19:12:26.584592196 +0000 UTC m=+2519.892786904" observedRunningTime="2026-01-26 19:12:27.073910483 +0000 UTC m=+2520.382105211" watchObservedRunningTime="2026-01-26 19:12:27.085159368 +0000 UTC m=+2520.393354076" Jan 26 19:12:30 crc kubenswrapper[4737]: I0126 19:12:30.982861 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:12:30 crc kubenswrapper[4737]: E0126 19:12:30.983801 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:12:31 crc kubenswrapper[4737]: I0126 19:12:31.979214 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:31 crc kubenswrapper[4737]: I0126 19:12:31.979662 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:32 crc kubenswrapper[4737]: I0126 19:12:32.031291 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:32 crc kubenswrapper[4737]: I0126 19:12:32.161525 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:32 crc kubenswrapper[4737]: I0126 19:12:32.275382 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:34 crc kubenswrapper[4737]: I0126 19:12:34.125590 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r747k" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="registry-server" containerID="cri-o://4dba1f506c630e98cb6c893dee4ad7168b88027226fccbb3543c9c112d2a5282" gracePeriod=2 Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.138939 4737 generic.go:334] "Generic (PLEG): container finished" podID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerID="4dba1f506c630e98cb6c893dee4ad7168b88027226fccbb3543c9c112d2a5282" exitCode=0 Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.139024 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" 
event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerDied","Data":"4dba1f506c630e98cb6c893dee4ad7168b88027226fccbb3543c9c112d2a5282"} Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.637320 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.665328 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstfq\" (UniqueName: \"kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq\") pod \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.666114 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities\") pod \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.666139 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content\") pod \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\" (UID: \"8d11352c-49a8-45c7-a27f-77bcfbc14bf2\") " Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.667086 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities" (OuterVolumeSpecName: "utilities") pod "8d11352c-49a8-45c7-a27f-77bcfbc14bf2" (UID: "8d11352c-49a8-45c7-a27f-77bcfbc14bf2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.674480 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq" (OuterVolumeSpecName: "kube-api-access-wstfq") pod "8d11352c-49a8-45c7-a27f-77bcfbc14bf2" (UID: "8d11352c-49a8-45c7-a27f-77bcfbc14bf2"). InnerVolumeSpecName "kube-api-access-wstfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.705955 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d11352c-49a8-45c7-a27f-77bcfbc14bf2" (UID: "8d11352c-49a8-45c7-a27f-77bcfbc14bf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.769203 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.769239 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:35 crc kubenswrapper[4737]: I0126 19:12:35.769252 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wstfq\" (UniqueName: \"kubernetes.io/projected/8d11352c-49a8-45c7-a27f-77bcfbc14bf2-kube-api-access-wstfq\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.151962 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r747k" 
event={"ID":"8d11352c-49a8-45c7-a27f-77bcfbc14bf2","Type":"ContainerDied","Data":"a9e767297ab134c0a91f1bb3eb3f9171bc6c01b12c0231e3b76b7bc10dae8ea1"} Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.152011 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r747k" Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.152025 4737 scope.go:117] "RemoveContainer" containerID="4dba1f506c630e98cb6c893dee4ad7168b88027226fccbb3543c9c112d2a5282" Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.181493 4737 scope.go:117] "RemoveContainer" containerID="707cd2482abfab8f581f655f2f7d440a545e5743c8bedbe3b0136bf6786c8bc8" Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.197788 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.208814 4737 scope.go:117] "RemoveContainer" containerID="b4dbbd304a451d03a59943fccc906d0ae7ad3af6175b6c40ac2e78a5394eecec" Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.210060 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r747k"] Jan 26 19:12:36 crc kubenswrapper[4737]: I0126 19:12:36.996118 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" path="/var/lib/kubelet/pods/8d11352c-49a8-45c7-a27f-77bcfbc14bf2/volumes" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.865900 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:38 crc kubenswrapper[4737]: E0126 19:12:38.866568 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="extract-content" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.866586 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="extract-content" Jan 26 19:12:38 crc kubenswrapper[4737]: E0126 19:12:38.866627 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="extract-utilities" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.866635 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="extract-utilities" Jan 26 19:12:38 crc kubenswrapper[4737]: E0126 19:12:38.866656 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="registry-server" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.866667 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="registry-server" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.866956 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d11352c-49a8-45c7-a27f-77bcfbc14bf2" containerName="registry-server" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.869081 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.878179 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.950049 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.950310 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrlw7\" (UniqueName: \"kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:38 crc kubenswrapper[4737]: I0126 19:12:38.950500 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.053279 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.053373 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vrlw7\" (UniqueName: \"kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.053457 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.053888 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.054391 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.081688 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrlw7\" (UniqueName: \"kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7\") pod \"community-operators-g8vp4\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.188681 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:39 crc kubenswrapper[4737]: I0126 19:12:39.841442 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:40 crc kubenswrapper[4737]: I0126 19:12:40.202275 4737 generic.go:334] "Generic (PLEG): container finished" podID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerID="86df8b59ed6d2cb17247429747e11918c1432a0bb7de73d0a634137d8ab48bed" exitCode=0 Jan 26 19:12:40 crc kubenswrapper[4737]: I0126 19:12:40.202398 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerDied","Data":"86df8b59ed6d2cb17247429747e11918c1432a0bb7de73d0a634137d8ab48bed"} Jan 26 19:12:40 crc kubenswrapper[4737]: I0126 19:12:40.202845 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerStarted","Data":"cc5042dcae74af99e2a26f347077d2af96e58bfea4daea231c7afcb86e6b93ae"} Jan 26 19:12:41 crc kubenswrapper[4737]: I0126 19:12:41.221646 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerStarted","Data":"80aa6d512024876c3f672a8b26f770cb619bcda545a3e1ff10f26b77f5281987"} Jan 26 19:12:42 crc kubenswrapper[4737]: I0126 19:12:42.982493 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:12:42 crc kubenswrapper[4737]: E0126 19:12:42.983169 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:12:43 crc kubenswrapper[4737]: I0126 19:12:43.242693 4737 generic.go:334] "Generic (PLEG): container finished" podID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerID="80aa6d512024876c3f672a8b26f770cb619bcda545a3e1ff10f26b77f5281987" exitCode=0 Jan 26 19:12:43 crc kubenswrapper[4737]: I0126 19:12:43.242759 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerDied","Data":"80aa6d512024876c3f672a8b26f770cb619bcda545a3e1ff10f26b77f5281987"} Jan 26 19:12:44 crc kubenswrapper[4737]: I0126 19:12:44.266608 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerStarted","Data":"325e824356665cf9fe511ace3bf189c23a660b8a7658c95ff0af4ec32f3c069d"} Jan 26 19:12:44 crc kubenswrapper[4737]: I0126 19:12:44.291061 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g8vp4" podStartSLOduration=2.8675203849999997 podStartE2EDuration="6.291035105s" podCreationTimestamp="2026-01-26 19:12:38 +0000 UTC" firstStartedPulling="2026-01-26 19:12:40.205226489 +0000 UTC m=+2533.513421197" lastFinishedPulling="2026-01-26 19:12:43.628741209 +0000 UTC m=+2536.936935917" observedRunningTime="2026-01-26 19:12:44.288312458 +0000 UTC m=+2537.596507176" watchObservedRunningTime="2026-01-26 19:12:44.291035105 +0000 UTC m=+2537.599229813" Jan 26 19:12:49 crc kubenswrapper[4737]: I0126 19:12:49.189428 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:49 crc kubenswrapper[4737]: I0126 
19:12:49.189736 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:49 crc kubenswrapper[4737]: I0126 19:12:49.259010 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:49 crc kubenswrapper[4737]: I0126 19:12:49.716232 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:49 crc kubenswrapper[4737]: I0126 19:12:49.771206 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:51 crc kubenswrapper[4737]: I0126 19:12:51.685866 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g8vp4" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="registry-server" containerID="cri-o://325e824356665cf9fe511ace3bf189c23a660b8a7658c95ff0af4ec32f3c069d" gracePeriod=2 Jan 26 19:12:52 crc kubenswrapper[4737]: I0126 19:12:52.705352 4737 generic.go:334] "Generic (PLEG): container finished" podID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerID="325e824356665cf9fe511ace3bf189c23a660b8a7658c95ff0af4ec32f3c069d" exitCode=0 Jan 26 19:12:52 crc kubenswrapper[4737]: I0126 19:12:52.705395 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerDied","Data":"325e824356665cf9fe511ace3bf189c23a660b8a7658c95ff0af4ec32f3c069d"} Jan 26 19:12:52 crc kubenswrapper[4737]: I0126 19:12:52.993396 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.140324 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrlw7\" (UniqueName: \"kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7\") pod \"4e3e5c75-b555-443f-b919-e98330a5a21e\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.140390 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content\") pod \"4e3e5c75-b555-443f-b919-e98330a5a21e\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.140864 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities\") pod \"4e3e5c75-b555-443f-b919-e98330a5a21e\" (UID: \"4e3e5c75-b555-443f-b919-e98330a5a21e\") " Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.141420 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities" (OuterVolumeSpecName: "utilities") pod "4e3e5c75-b555-443f-b919-e98330a5a21e" (UID: "4e3e5c75-b555-443f-b919-e98330a5a21e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.141726 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.149641 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7" (OuterVolumeSpecName: "kube-api-access-vrlw7") pod "4e3e5c75-b555-443f-b919-e98330a5a21e" (UID: "4e3e5c75-b555-443f-b919-e98330a5a21e"). InnerVolumeSpecName "kube-api-access-vrlw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.197924 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3e5c75-b555-443f-b919-e98330a5a21e" (UID: "4e3e5c75-b555-443f-b919-e98330a5a21e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.246426 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrlw7\" (UniqueName: \"kubernetes.io/projected/4e3e5c75-b555-443f-b919-e98330a5a21e-kube-api-access-vrlw7\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.246475 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3e5c75-b555-443f-b919-e98330a5a21e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.720732 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8vp4" event={"ID":"4e3e5c75-b555-443f-b919-e98330a5a21e","Type":"ContainerDied","Data":"cc5042dcae74af99e2a26f347077d2af96e58bfea4daea231c7afcb86e6b93ae"} Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.720783 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g8vp4" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.721140 4737 scope.go:117] "RemoveContainer" containerID="325e824356665cf9fe511ace3bf189c23a660b8a7658c95ff0af4ec32f3c069d" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.757964 4737 scope.go:117] "RemoveContainer" containerID="80aa6d512024876c3f672a8b26f770cb619bcda545a3e1ff10f26b77f5281987" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.800267 4737 scope.go:117] "RemoveContainer" containerID="86df8b59ed6d2cb17247429747e11918c1432a0bb7de73d0a634137d8ab48bed" Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.831848 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:53 crc kubenswrapper[4737]: I0126 19:12:53.844889 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g8vp4"] Jan 26 19:12:54 crc kubenswrapper[4737]: I0126 19:12:54.995838 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" path="/var/lib/kubelet/pods/4e3e5c75-b555-443f-b919-e98330a5a21e/volumes" Jan 26 19:12:56 crc kubenswrapper[4737]: I0126 19:12:56.990222 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:12:56 crc kubenswrapper[4737]: E0126 19:12:56.990530 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:13:09 crc kubenswrapper[4737]: I0126 19:13:09.906689 4737 generic.go:334] "Generic (PLEG): container 
finished" podID="f03ef699-8fd7-4aad-a3a5-8a7306048d86" containerID="12abbb23ed42e267fff96531a71b11764c3afc111d07eb429a12161ddda4becb" exitCode=0 Jan 26 19:13:09 crc kubenswrapper[4737]: I0126 19:13:09.906941 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" event={"ID":"f03ef699-8fd7-4aad-a3a5-8a7306048d86","Type":"ContainerDied","Data":"12abbb23ed42e267fff96531a71b11764c3afc111d07eb429a12161ddda4becb"} Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.462827 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.572891 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnskb\" (UniqueName: \"kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.572958 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.572997 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.573130 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.573232 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.573479 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam\") pod \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\" (UID: \"f03ef699-8fd7-4aad-a3a5-8a7306048d86\") " Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.609260 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.609865 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb" (OuterVolumeSpecName: "kube-api-access-rnskb") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "kube-api-access-rnskb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.680972 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnskb\" (UniqueName: \"kubernetes.io/projected/f03ef699-8fd7-4aad-a3a5-8a7306048d86-kube-api-access-rnskb\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.681303 4737 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.708312 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory" (OuterVolumeSpecName: "inventory") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.714311 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.736310 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.741153 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f03ef699-8fd7-4aad-a3a5-8a7306048d86" (UID: "f03ef699-8fd7-4aad-a3a5-8a7306048d86"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.783232 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.783269 4737 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.783281 4737 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.783291 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f03ef699-8fd7-4aad-a3a5-8a7306048d86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.927410 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" 
event={"ID":"f03ef699-8fd7-4aad-a3a5-8a7306048d86","Type":"ContainerDied","Data":"6a015a76b2437fd5c35529ffcb381d8bf08f0d696cb381accbcd35b7e03818f5"} Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.927445 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a015a76b2437fd5c35529ffcb381d8bf08f0d696cb381accbcd35b7e03818f5" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.927444 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp" Jan 26 19:13:11 crc kubenswrapper[4737]: I0126 19:13:11.983190 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:13:11 crc kubenswrapper[4737]: E0126 19:13:11.983465 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.056750 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp"] Jan 26 19:13:12 crc kubenswrapper[4737]: E0126 19:13:12.057304 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="extract-content" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057325 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="extract-content" Jan 26 19:13:12 crc kubenswrapper[4737]: E0126 19:13:12.057342 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="extract-utilities" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057349 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="extract-utilities" Jan 26 19:13:12 crc kubenswrapper[4737]: E0126 19:13:12.057375 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="registry-server" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057381 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="registry-server" Jan 26 19:13:12 crc kubenswrapper[4737]: E0126 19:13:12.057407 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ef699-8fd7-4aad-a3a5-8a7306048d86" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057418 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ef699-8fd7-4aad-a3a5-8a7306048d86" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057663 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3e5c75-b555-443f-b919-e98330a5a21e" containerName="registry-server" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.057697 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03ef699-8fd7-4aad-a3a5-8a7306048d86" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.058592 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.060998 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.061650 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.061859 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.061968 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.062238 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.100801 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp"] Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.104037 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.104138 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: 
\"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.104277 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2dc\" (UniqueName: \"kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.104409 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.104544 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.205628 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.205699 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.205722 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.205792 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2dc\" (UniqueName: \"kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.205860 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.210151 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" 
(UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.210755 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.211031 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.211195 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.227672 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2dc\" (UniqueName: \"kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.389498 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:13:12 crc kubenswrapper[4737]: I0126 19:13:12.967927 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp"] Jan 26 19:13:13 crc kubenswrapper[4737]: I0126 19:13:13.950094 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" event={"ID":"35694d2d-33da-4cab-96a8-4e14aa07b4f9","Type":"ContainerStarted","Data":"bcdd31358b3f3768850b301c1e45b4ec0df86262c43ffb65de5f79d2ddf222d0"} Jan 26 19:13:13 crc kubenswrapper[4737]: I0126 19:13:13.950417 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" event={"ID":"35694d2d-33da-4cab-96a8-4e14aa07b4f9","Type":"ContainerStarted","Data":"c70fde6ea4c33b6696220a0bb8688d89e784209fe5f373e7736995b5ff0bf95e"} Jan 26 19:13:13 crc kubenswrapper[4737]: I0126 19:13:13.977571 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" podStartSLOduration=1.5912824780000001 podStartE2EDuration="1.977551194s" podCreationTimestamp="2026-01-26 19:13:12 +0000 UTC" firstStartedPulling="2026-01-26 19:13:12.983229908 +0000 UTC m=+2566.291424616" lastFinishedPulling="2026-01-26 19:13:13.369498624 +0000 UTC m=+2566.677693332" observedRunningTime="2026-01-26 19:13:13.966921732 +0000 UTC m=+2567.275116460" watchObservedRunningTime="2026-01-26 19:13:13.977551194 +0000 UTC m=+2567.285745902" Jan 26 19:13:25 crc kubenswrapper[4737]: I0126 19:13:25.982320 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:13:25 crc kubenswrapper[4737]: E0126 19:13:25.984460 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:13:39 crc kubenswrapper[4737]: I0126 19:13:39.982454 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:13:39 crc kubenswrapper[4737]: E0126 19:13:39.983464 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:13:52 crc kubenswrapper[4737]: I0126 19:13:52.982385 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:13:52 crc kubenswrapper[4737]: E0126 19:13:52.983427 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:14:07 crc kubenswrapper[4737]: I0126 19:14:07.982217 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:14:07 crc kubenswrapper[4737]: E0126 19:14:07.983157 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:14:19 crc kubenswrapper[4737]: I0126 19:14:19.982256 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:14:19 crc kubenswrapper[4737]: E0126 19:14:19.983139 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:14:32 crc kubenswrapper[4737]: I0126 19:14:32.982568 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:14:32 crc kubenswrapper[4737]: E0126 19:14:32.983768 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:14:44 crc kubenswrapper[4737]: I0126 19:14:44.982477 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:14:44 crc kubenswrapper[4737]: E0126 19:14:44.983492 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:14:59 crc kubenswrapper[4737]: I0126 19:14:59.981989 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:14:59 crc kubenswrapper[4737]: E0126 19:14:59.982837 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.164940 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25"] Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.166616 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.168858 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.169148 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.178806 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25"] Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.281876 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jml86\" (UniqueName: \"kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.282040 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.282325 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.385225 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.385295 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.385465 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jml86\" (UniqueName: \"kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.386564 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.391895 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.405922 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jml86\" (UniqueName: \"kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86\") pod \"collect-profiles-29490915-9jj25\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:00 crc kubenswrapper[4737]: I0126 19:15:00.497459 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:01 crc kubenswrapper[4737]: I0126 19:15:01.026359 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25"] Jan 26 19:15:01 crc kubenswrapper[4737]: W0126 19:15:01.027830 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeeb9ebb_aa23_459a_b6f3_6ca0857850c4.slice/crio-265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e WatchSource:0}: Error finding container 265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e: Status 404 returned error can't find the container with id 265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e Jan 26 19:15:01 crc kubenswrapper[4737]: I0126 19:15:01.124810 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" event={"ID":"beeb9ebb-aa23-459a-b6f3-6ca0857850c4","Type":"ContainerStarted","Data":"265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e"} Jan 26 19:15:02 crc 
kubenswrapper[4737]: I0126 19:15:02.151817 4737 generic.go:334] "Generic (PLEG): container finished" podID="beeb9ebb-aa23-459a-b6f3-6ca0857850c4" containerID="b16304b51cc0fac85dc1c289bc4c7b4734327cd5ded61b60e24e1c9857b527e7" exitCode=0 Jan 26 19:15:02 crc kubenswrapper[4737]: I0126 19:15:02.152224 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" event={"ID":"beeb9ebb-aa23-459a-b6f3-6ca0857850c4","Type":"ContainerDied","Data":"b16304b51cc0fac85dc1c289bc4c7b4734327cd5ded61b60e24e1c9857b527e7"} Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.561708 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.662766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume\") pod \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.662912 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jml86\" (UniqueName: \"kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86\") pod \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.663359 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume\") pod \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\" (UID: \"beeb9ebb-aa23-459a-b6f3-6ca0857850c4\") " Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.663966 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "beeb9ebb-aa23-459a-b6f3-6ca0857850c4" (UID: "beeb9ebb-aa23-459a-b6f3-6ca0857850c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.669729 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "beeb9ebb-aa23-459a-b6f3-6ca0857850c4" (UID: "beeb9ebb-aa23-459a-b6f3-6ca0857850c4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.670380 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86" (OuterVolumeSpecName: "kube-api-access-jml86") pod "beeb9ebb-aa23-459a-b6f3-6ca0857850c4" (UID: "beeb9ebb-aa23-459a-b6f3-6ca0857850c4"). InnerVolumeSpecName "kube-api-access-jml86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.766301 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.766697 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jml86\" (UniqueName: \"kubernetes.io/projected/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-kube-api-access-jml86\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4737]: I0126 19:15:03.766711 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beeb9ebb-aa23-459a-b6f3-6ca0857850c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.175399 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" event={"ID":"beeb9ebb-aa23-459a-b6f3-6ca0857850c4","Type":"ContainerDied","Data":"265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e"} Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.175437 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25" Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.175444 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265ba4238f3ae44e238008745648dc8ea2d6a2ac3f8e9b3f162fdd2a3f97791e" Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.641835 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69"] Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.652788 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-k4f69"] Jan 26 19:15:04 crc kubenswrapper[4737]: I0126 19:15:04.995027 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac652a18-5fbd-483e-94d1-0782ee0cc3ac" path="/var/lib/kubelet/pods/ac652a18-5fbd-483e-94d1-0782ee0cc3ac/volumes" Jan 26 19:15:11 crc kubenswrapper[4737]: I0126 19:15:11.983439 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:15:11 crc kubenswrapper[4737]: E0126 19:15:11.984409 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:15:23 crc kubenswrapper[4737]: I0126 19:15:23.982631 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:15:23 crc kubenswrapper[4737]: E0126 19:15:23.983540 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:15:35 crc kubenswrapper[4737]: I0126 19:15:35.934028 4737 scope.go:117] "RemoveContainer" containerID="5843b80d4421ac37b77474ec11c8789e959f8d0527152c55f5e1fa7681a2742e" Jan 26 19:15:37 crc kubenswrapper[4737]: I0126 19:15:37.982490 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:15:38 crc kubenswrapper[4737]: I0126 19:15:38.520676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944"} Jan 26 19:18:00 crc kubenswrapper[4737]: I0126 19:18:00.949316 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:18:00 crc kubenswrapper[4737]: I0126 19:18:00.949900 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:18:07 crc kubenswrapper[4737]: I0126 19:18:07.334596 4737 generic.go:334] "Generic (PLEG): container finished" podID="35694d2d-33da-4cab-96a8-4e14aa07b4f9" containerID="bcdd31358b3f3768850b301c1e45b4ec0df86262c43ffb65de5f79d2ddf222d0" 
exitCode=0 Jan 26 19:18:07 crc kubenswrapper[4737]: I0126 19:18:07.334790 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" event={"ID":"35694d2d-33da-4cab-96a8-4e14aa07b4f9","Type":"ContainerDied","Data":"bcdd31358b3f3768850b301c1e45b4ec0df86262c43ffb65de5f79d2ddf222d0"} Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.812103 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.884396 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle\") pod \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.884539 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam\") pod \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.884641 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2dc\" (UniqueName: \"kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc\") pod \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.884748 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0\") pod \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\" (UID: 
\"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.884811 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory\") pod \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\" (UID: \"35694d2d-33da-4cab-96a8-4e14aa07b4f9\") " Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.890596 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "35694d2d-33da-4cab-96a8-4e14aa07b4f9" (UID: "35694d2d-33da-4cab-96a8-4e14aa07b4f9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.890725 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc" (OuterVolumeSpecName: "kube-api-access-kl2dc") pod "35694d2d-33da-4cab-96a8-4e14aa07b4f9" (UID: "35694d2d-33da-4cab-96a8-4e14aa07b4f9"). InnerVolumeSpecName "kube-api-access-kl2dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.918042 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35694d2d-33da-4cab-96a8-4e14aa07b4f9" (UID: "35694d2d-33da-4cab-96a8-4e14aa07b4f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.921434 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory" (OuterVolumeSpecName: "inventory") pod "35694d2d-33da-4cab-96a8-4e14aa07b4f9" (UID: "35694d2d-33da-4cab-96a8-4e14aa07b4f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.922910 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "35694d2d-33da-4cab-96a8-4e14aa07b4f9" (UID: "35694d2d-33da-4cab-96a8-4e14aa07b4f9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.988431 4737 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.988469 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.988482 4737 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.988493 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35694d2d-33da-4cab-96a8-4e14aa07b4f9-ssh-key-openstack-edpm-ipam\") on node 
\"crc\" DevicePath \"\"" Jan 26 19:18:08 crc kubenswrapper[4737]: I0126 19:18:08.988502 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl2dc\" (UniqueName: \"kubernetes.io/projected/35694d2d-33da-4cab-96a8-4e14aa07b4f9-kube-api-access-kl2dc\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.358290 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" event={"ID":"35694d2d-33da-4cab-96a8-4e14aa07b4f9","Type":"ContainerDied","Data":"c70fde6ea4c33b6696220a0bb8688d89e784209fe5f373e7736995b5ff0bf95e"} Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.358335 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70fde6ea4c33b6696220a0bb8688d89e784209fe5f373e7736995b5ff0bf95e" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.358401 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.452387 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj"] Jan 26 19:18:09 crc kubenswrapper[4737]: E0126 19:18:09.453026 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beeb9ebb-aa23-459a-b6f3-6ca0857850c4" containerName="collect-profiles" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.453058 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="beeb9ebb-aa23-459a-b6f3-6ca0857850c4" containerName="collect-profiles" Jan 26 19:18:09 crc kubenswrapper[4737]: E0126 19:18:09.453153 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35694d2d-33da-4cab-96a8-4e14aa07b4f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.453165 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="35694d2d-33da-4cab-96a8-4e14aa07b4f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.453431 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="beeb9ebb-aa23-459a-b6f3-6ca0857850c4" containerName="collect-profiles" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.453455 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="35694d2d-33da-4cab-96a8-4e14aa07b4f9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.454470 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.457040 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.457181 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.457411 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.458188 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.458384 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.458853 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.462788 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 19:18:09 crc 
kubenswrapper[4737]: I0126 19:18:09.466584 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj"] Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502303 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502416 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502454 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502470 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc 
kubenswrapper[4737]: I0126 19:18:09.502496 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502545 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502564 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5rt\" (UniqueName: \"kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502583 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.502640 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.605552 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606144 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606252 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606284 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5rt\" (UniqueName: \"kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: 
\"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606316 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606402 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606554 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606621 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.606670 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.607568 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.611175 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.611566 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.611646 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.612543 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.612777 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.612918 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.624517 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5rt\" (UniqueName: \"kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.624639 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m7qxj\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:09 crc kubenswrapper[4737]: I0126 19:18:09.777161 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:18:10 crc kubenswrapper[4737]: I0126 19:18:10.404870 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:18:10 crc kubenswrapper[4737]: I0126 19:18:10.410188 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj"] Jan 26 19:18:11 crc kubenswrapper[4737]: I0126 19:18:11.385669 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" event={"ID":"c1f6bd41-c1ed-47f9-a3db-03756845afbc","Type":"ContainerStarted","Data":"fc2c6c60087a3aa0b6d5d752239d76e09a8e1288552e8f2fff39a78272dfd39f"} Jan 26 19:18:12 crc kubenswrapper[4737]: I0126 19:18:12.397769 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" event={"ID":"c1f6bd41-c1ed-47f9-a3db-03756845afbc","Type":"ContainerStarted","Data":"db1397174391824481990eab2ff197b1f0a995489f4f5f9ac653acfa7256ee2d"} Jan 26 19:18:12 crc kubenswrapper[4737]: I0126 19:18:12.415575 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" podStartSLOduration=2.821986282 podStartE2EDuration="3.415558889s" podCreationTimestamp="2026-01-26 19:18:09 +0000 UTC" firstStartedPulling="2026-01-26 19:18:10.404649868 +0000 UTC m=+2863.712844586" lastFinishedPulling="2026-01-26 19:18:10.998222485 +0000 UTC m=+2864.306417193" observedRunningTime="2026-01-26 19:18:12.413663253 +0000 UTC 
m=+2865.721857971" watchObservedRunningTime="2026-01-26 19:18:12.415558889 +0000 UTC m=+2865.723753597" Jan 26 19:18:30 crc kubenswrapper[4737]: I0126 19:18:30.949812 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:18:30 crc kubenswrapper[4737]: I0126 19:18:30.950635 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:00 crc kubenswrapper[4737]: I0126 19:19:00.949470 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:19:00 crc kubenswrapper[4737]: I0126 19:19:00.950095 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:00 crc kubenswrapper[4737]: I0126 19:19:00.950154 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:19:00 crc kubenswrapper[4737]: I0126 19:19:00.951165 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:19:00 crc kubenswrapper[4737]: I0126 19:19:00.951211 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944" gracePeriod=600 Jan 26 19:19:02 crc kubenswrapper[4737]: I0126 19:19:02.017374 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944" exitCode=0 Jan 26 19:19:02 crc kubenswrapper[4737]: I0126 19:19:02.017453 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944"} Jan 26 19:19:02 crc kubenswrapper[4737]: I0126 19:19:02.017919 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959"} Jan 26 19:19:02 crc kubenswrapper[4737]: I0126 19:19:02.017938 4737 scope.go:117] "RemoveContainer" containerID="dc219edb88bc1e0e52f10e642b13d911ee1bfd5a5f90c09cf6a4ad6a9f1a4b7f" Jan 26 19:20:48 crc kubenswrapper[4737]: I0126 19:20:48.196943 4737 generic.go:334] "Generic (PLEG): container finished" podID="c1f6bd41-c1ed-47f9-a3db-03756845afbc" 
containerID="db1397174391824481990eab2ff197b1f0a995489f4f5f9ac653acfa7256ee2d" exitCode=0 Jan 26 19:20:48 crc kubenswrapper[4737]: I0126 19:20:48.197030 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" event={"ID":"c1f6bd41-c1ed-47f9-a3db-03756845afbc","Type":"ContainerDied","Data":"db1397174391824481990eab2ff197b1f0a995489f4f5f9ac653acfa7256ee2d"} Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.728167 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.786984 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.787876 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.788229 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.788648 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle\") 
pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.788894 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.789243 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns5rt\" (UniqueName: \"kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.789501 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.789655 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.789868 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1\") pod \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\" (UID: \"c1f6bd41-c1ed-47f9-a3db-03756845afbc\") " Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 
19:20:49.815632 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt" (OuterVolumeSpecName: "kube-api-access-ns5rt") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "kube-api-access-ns5rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.816793 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.829958 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.838275 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.842060 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.847347 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory" (OuterVolumeSpecName: "inventory") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.855269 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.855608 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.856916 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c1f6bd41-c1ed-47f9-a3db-03756845afbc" (UID: "c1f6bd41-c1ed-47f9-a3db-03756845afbc"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.894949 4737 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.894988 4737 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895004 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895015 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895027 4737 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895039 4737 
reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895050 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns5rt\" (UniqueName: \"kubernetes.io/projected/c1f6bd41-c1ed-47f9-a3db-03756845afbc-kube-api-access-ns5rt\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895061 4737 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:49 crc kubenswrapper[4737]: I0126 19:20:49.895104 4737 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c1f6bd41-c1ed-47f9-a3db-03756845afbc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.248899 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" event={"ID":"c1f6bd41-c1ed-47f9-a3db-03756845afbc","Type":"ContainerDied","Data":"fc2c6c60087a3aa0b6d5d752239d76e09a8e1288552e8f2fff39a78272dfd39f"} Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.248939 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2c6c60087a3aa0b6d5d752239d76e09a8e1288552e8f2fff39a78272dfd39f" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.248970 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m7qxj" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.346491 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7"] Jan 26 19:20:50 crc kubenswrapper[4737]: E0126 19:20:50.347182 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f6bd41-c1ed-47f9-a3db-03756845afbc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.347209 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f6bd41-c1ed-47f9-a3db-03756845afbc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.347511 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f6bd41-c1ed-47f9-a3db-03756845afbc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.348544 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.351719 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.351767 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.351988 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.352208 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.352333 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.379609 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7"] Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510326 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510404 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: 
\"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510632 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510763 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6z2h\" (UniqueName: \"kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510872 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.510919 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.511094 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.613444 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6z2h\" (UniqueName: \"kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.613587 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.613623 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 
19:20:50.613703 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.613904 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.613995 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.614172 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.618161 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.618208 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.619770 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.619792 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.620232 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.620578 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.635530 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6z2h\" (UniqueName: \"kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v27w7\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:50 crc kubenswrapper[4737]: I0126 19:20:50.697568 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:20:51 crc kubenswrapper[4737]: I0126 19:20:51.260996 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7"] Jan 26 19:20:52 crc kubenswrapper[4737]: I0126 19:20:52.271980 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" event={"ID":"6bacdfa3-047c-42c9-a233-7daac1e8b65d","Type":"ContainerStarted","Data":"4b5d1cec237eb4c8965255a38cb29f70573136aa2a2325e12381921ebe306f8f"} Jan 26 19:20:53 crc kubenswrapper[4737]: I0126 19:20:53.284447 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" event={"ID":"6bacdfa3-047c-42c9-a233-7daac1e8b65d","Type":"ContainerStarted","Data":"649bdc39080d814c22f03ac0246241ec4a5f80570c8aa472e5c8e3d00d872675"} Jan 26 19:20:53 crc kubenswrapper[4737]: I0126 19:20:53.312805 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" podStartSLOduration=2.091510538 podStartE2EDuration="3.31276577s" podCreationTimestamp="2026-01-26 19:20:50 +0000 UTC" firstStartedPulling="2026-01-26 19:20:51.262209554 +0000 UTC m=+3024.570404262" lastFinishedPulling="2026-01-26 19:20:52.483464786 +0000 UTC m=+3025.791659494" observedRunningTime="2026-01-26 19:20:53.30709802 +0000 UTC m=+3026.615292728" watchObservedRunningTime="2026-01-26 19:20:53.31276577 +0000 UTC m=+3026.620960478" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.565422 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.569807 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.583415 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.657284 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8d8\" (UniqueName: \"kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.657369 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.657432 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.759489 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8d8\" (UniqueName: \"kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.759591 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.759661 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.760379 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.760523 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.790277 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8d8\" (UniqueName: \"kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8\") pod \"certified-operators-lzd7s\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.897880 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.949617 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:21:30 crc kubenswrapper[4737]: I0126 19:21:30.949962 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:21:31 crc kubenswrapper[4737]: I0126 19:21:31.441031 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:31 crc kubenswrapper[4737]: I0126 19:21:31.737387 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerStarted","Data":"7b415708e9e5a59e846fbb4b7ac4169198f138e49542854f805713f6056e231c"} Jan 26 19:21:32 crc kubenswrapper[4737]: I0126 19:21:32.749231 4737 generic.go:334] "Generic (PLEG): container finished" podID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerID="d2c560b45d6d33684ab6a85e532a88ec572064212c6a4b09088314a24b35ac9c" exitCode=0 Jan 26 19:21:32 crc kubenswrapper[4737]: I0126 19:21:32.749342 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerDied","Data":"d2c560b45d6d33684ab6a85e532a88ec572064212c6a4b09088314a24b35ac9c"} Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.777353 4737 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.785233 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.831044 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.948261 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2kf\" (UniqueName: \"kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.948369 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:33 crc kubenswrapper[4737]: I0126 19:21:33.948569 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.050899 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " 
pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.051339 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2kf\" (UniqueName: \"kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.051486 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.051956 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.052446 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.092892 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2kf\" (UniqueName: \"kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf\") pod \"redhat-operators-2h2rg\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 
crc kubenswrapper[4737]: I0126 19:21:34.127463 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.788336 4737 generic.go:334] "Generic (PLEG): container finished" podID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerID="0a5979b6080dfbc22d77a233b61b107fa496f1e5a25ec3a158a03156766ac821" exitCode=0 Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.788430 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerDied","Data":"0a5979b6080dfbc22d77a233b61b107fa496f1e5a25ec3a158a03156766ac821"} Jan 26 19:21:34 crc kubenswrapper[4737]: W0126 19:21:34.830234 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4304c6bc_d0ec_48bc_8c17_7aa4957588e7.slice/crio-21e8c4c226721fb905913025e503833e4a9c0c3df11c6ecf715b5e7ebcf3c811 WatchSource:0}: Error finding container 21e8c4c226721fb905913025e503833e4a9c0c3df11c6ecf715b5e7ebcf3c811: Status 404 returned error can't find the container with id 21e8c4c226721fb905913025e503833e4a9c0c3df11c6ecf715b5e7ebcf3c811 Jan 26 19:21:34 crc kubenswrapper[4737]: I0126 19:21:34.849064 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:21:35 crc kubenswrapper[4737]: I0126 19:21:35.799984 4737 generic.go:334] "Generic (PLEG): container finished" podID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerID="da2138a1963ac2288e977911f9259607e82d6948ca9469233a7330674062c9dc" exitCode=0 Jan 26 19:21:35 crc kubenswrapper[4737]: I0126 19:21:35.801636 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" 
event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerDied","Data":"da2138a1963ac2288e977911f9259607e82d6948ca9469233a7330674062c9dc"} Jan 26 19:21:35 crc kubenswrapper[4737]: I0126 19:21:35.801780 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerStarted","Data":"21e8c4c226721fb905913025e503833e4a9c0c3df11c6ecf715b5e7ebcf3c811"} Jan 26 19:21:36 crc kubenswrapper[4737]: I0126 19:21:36.814210 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerStarted","Data":"7051fe1f759fa8eb179f4745183819a4d06ed53c2fc1f2af0557fd4b756677c3"} Jan 26 19:21:36 crc kubenswrapper[4737]: I0126 19:21:36.885690 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lzd7s" podStartSLOduration=3.23030571 podStartE2EDuration="6.885672133s" podCreationTimestamp="2026-01-26 19:21:30 +0000 UTC" firstStartedPulling="2026-01-26 19:21:32.751891068 +0000 UTC m=+3066.060085776" lastFinishedPulling="2026-01-26 19:21:36.407257491 +0000 UTC m=+3069.715452199" observedRunningTime="2026-01-26 19:21:36.884655329 +0000 UTC m=+3070.192850037" watchObservedRunningTime="2026-01-26 19:21:36.885672133 +0000 UTC m=+3070.193866841" Jan 26 19:21:38 crc kubenswrapper[4737]: I0126 19:21:38.840576 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerStarted","Data":"01d2ba3fe5b555e3be533192d51283dc82beedb0203b67aa868584c3614643dd"} Jan 26 19:21:40 crc kubenswrapper[4737]: I0126 19:21:40.898734 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:40 crc kubenswrapper[4737]: I0126 19:21:40.899320 
4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:40 crc kubenswrapper[4737]: I0126 19:21:40.955285 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:41 crc kubenswrapper[4737]: I0126 19:21:41.921842 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:45 crc kubenswrapper[4737]: I0126 19:21:45.563341 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:45 crc kubenswrapper[4737]: I0126 19:21:45.564083 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lzd7s" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="registry-server" containerID="cri-o://7051fe1f759fa8eb179f4745183819a4d06ed53c2fc1f2af0557fd4b756677c3" gracePeriod=2 Jan 26 19:21:45 crc kubenswrapper[4737]: I0126 19:21:45.932435 4737 generic.go:334] "Generic (PLEG): container finished" podID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerID="01d2ba3fe5b555e3be533192d51283dc82beedb0203b67aa868584c3614643dd" exitCode=0 Jan 26 19:21:45 crc kubenswrapper[4737]: I0126 19:21:45.932521 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerDied","Data":"01d2ba3fe5b555e3be533192d51283dc82beedb0203b67aa868584c3614643dd"} Jan 26 19:21:46 crc kubenswrapper[4737]: I0126 19:21:46.945399 4737 generic.go:334] "Generic (PLEG): container finished" podID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerID="7051fe1f759fa8eb179f4745183819a4d06ed53c2fc1f2af0557fd4b756677c3" exitCode=0 Jan 26 19:21:46 crc kubenswrapper[4737]: I0126 19:21:46.945478 4737 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerDied","Data":"7051fe1f759fa8eb179f4745183819a4d06ed53c2fc1f2af0557fd4b756677c3"} Jan 26 19:21:47 crc kubenswrapper[4737]: I0126 19:21:47.956458 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerStarted","Data":"2b3975a9b47528dc9c541edfe2b0aecb0189470224152e6c3efe7d9f83ec137b"} Jan 26 19:21:47 crc kubenswrapper[4737]: I0126 19:21:47.959282 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzd7s" event={"ID":"a7c95d0d-6804-4095-8640-b8376e00ad4e","Type":"ContainerDied","Data":"7b415708e9e5a59e846fbb4b7ac4169198f138e49542854f805713f6056e231c"} Jan 26 19:21:47 crc kubenswrapper[4737]: I0126 19:21:47.959308 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b415708e9e5a59e846fbb4b7ac4169198f138e49542854f805713f6056e231c" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.008966 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.109922 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8d8\" (UniqueName: \"kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8\") pod \"a7c95d0d-6804-4095-8640-b8376e00ad4e\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.110344 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content\") pod \"a7c95d0d-6804-4095-8640-b8376e00ad4e\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.112583 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities\") pod \"a7c95d0d-6804-4095-8640-b8376e00ad4e\" (UID: \"a7c95d0d-6804-4095-8640-b8376e00ad4e\") " Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.113297 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities" (OuterVolumeSpecName: "utilities") pod "a7c95d0d-6804-4095-8640-b8376e00ad4e" (UID: "a7c95d0d-6804-4095-8640-b8376e00ad4e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.114030 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.119461 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8" (OuterVolumeSpecName: "kube-api-access-hh8d8") pod "a7c95d0d-6804-4095-8640-b8376e00ad4e" (UID: "a7c95d0d-6804-4095-8640-b8376e00ad4e"). InnerVolumeSpecName "kube-api-access-hh8d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.173677 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7c95d0d-6804-4095-8640-b8376e00ad4e" (UID: "a7c95d0d-6804-4095-8640-b8376e00ad4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.215526 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c95d0d-6804-4095-8640-b8376e00ad4e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.215913 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8d8\" (UniqueName: \"kubernetes.io/projected/a7c95d0d-6804-4095-8640-b8376e00ad4e-kube-api-access-hh8d8\") on node \"crc\" DevicePath \"\"" Jan 26 19:21:48 crc kubenswrapper[4737]: I0126 19:21:48.977034 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzd7s" Jan 26 19:21:49 crc kubenswrapper[4737]: I0126 19:21:49.009407 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2h2rg" podStartSLOduration=5.289514222 podStartE2EDuration="16.009384784s" podCreationTimestamp="2026-01-26 19:21:33 +0000 UTC" firstStartedPulling="2026-01-26 19:21:36.81607925 +0000 UTC m=+3070.124273958" lastFinishedPulling="2026-01-26 19:21:47.535949812 +0000 UTC m=+3080.844144520" observedRunningTime="2026-01-26 19:21:49.001707747 +0000 UTC m=+3082.309902475" watchObservedRunningTime="2026-01-26 19:21:49.009384784 +0000 UTC m=+3082.317579492" Jan 26 19:21:49 crc kubenswrapper[4737]: I0126 19:21:49.040970 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:49 crc kubenswrapper[4737]: I0126 19:21:49.051027 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lzd7s"] Jan 26 19:21:51 crc kubenswrapper[4737]: I0126 19:21:51.000036 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" path="/var/lib/kubelet/pods/a7c95d0d-6804-4095-8640-b8376e00ad4e/volumes" Jan 26 19:21:54 crc kubenswrapper[4737]: I0126 19:21:54.128528 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:54 crc kubenswrapper[4737]: I0126 19:21:54.128982 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:21:55 crc kubenswrapper[4737]: I0126 19:21:55.196342 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2h2rg" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="registry-server" probeResult="failure" output=< Jan 26 19:21:55 crc kubenswrapper[4737]: 
timeout: failed to connect service ":50051" within 1s Jan 26 19:21:55 crc kubenswrapper[4737]: > Jan 26 19:22:00 crc kubenswrapper[4737]: I0126 19:22:00.949697 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:22:00 crc kubenswrapper[4737]: I0126 19:22:00.950365 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:22:04 crc kubenswrapper[4737]: I0126 19:22:04.181894 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:22:04 crc kubenswrapper[4737]: I0126 19:22:04.252427 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:22:07 crc kubenswrapper[4737]: I0126 19:22:07.757256 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:22:07 crc kubenswrapper[4737]: I0126 19:22:07.758152 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2h2rg" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="registry-server" containerID="cri-o://2b3975a9b47528dc9c541edfe2b0aecb0189470224152e6c3efe7d9f83ec137b" gracePeriod=2 Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.211989 4737 generic.go:334] "Generic (PLEG): container finished" podID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" 
containerID="2b3975a9b47528dc9c541edfe2b0aecb0189470224152e6c3efe7d9f83ec137b" exitCode=0 Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.212045 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerDied","Data":"2b3975a9b47528dc9c541edfe2b0aecb0189470224152e6c3efe7d9f83ec137b"} Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.330459 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.483523 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content\") pod \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.483933 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities\") pod \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.484034 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f2kf\" (UniqueName: \"kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf\") pod \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\" (UID: \"4304c6bc-d0ec-48bc-8c17-7aa4957588e7\") " Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.484881 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities" (OuterVolumeSpecName: "utilities") pod "4304c6bc-d0ec-48bc-8c17-7aa4957588e7" (UID: 
"4304c6bc-d0ec-48bc-8c17-7aa4957588e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.490086 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf" (OuterVolumeSpecName: "kube-api-access-6f2kf") pod "4304c6bc-d0ec-48bc-8c17-7aa4957588e7" (UID: "4304c6bc-d0ec-48bc-8c17-7aa4957588e7"). InnerVolumeSpecName "kube-api-access-6f2kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.587590 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.587647 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f2kf\" (UniqueName: \"kubernetes.io/projected/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-kube-api-access-6f2kf\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.635626 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4304c6bc-d0ec-48bc-8c17-7aa4957588e7" (UID: "4304c6bc-d0ec-48bc-8c17-7aa4957588e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:22:09 crc kubenswrapper[4737]: I0126 19:22:09.690496 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4304c6bc-d0ec-48bc-8c17-7aa4957588e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.226574 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h2rg" event={"ID":"4304c6bc-d0ec-48bc-8c17-7aa4957588e7","Type":"ContainerDied","Data":"21e8c4c226721fb905913025e503833e4a9c0c3df11c6ecf715b5e7ebcf3c811"} Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.226954 4737 scope.go:117] "RemoveContainer" containerID="2b3975a9b47528dc9c541edfe2b0aecb0189470224152e6c3efe7d9f83ec137b" Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.226663 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h2rg" Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.273486 4737 scope.go:117] "RemoveContainer" containerID="01d2ba3fe5b555e3be533192d51283dc82beedb0203b67aa868584c3614643dd" Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.289092 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.302524 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2h2rg"] Jan 26 19:22:10 crc kubenswrapper[4737]: I0126 19:22:10.309703 4737 scope.go:117] "RemoveContainer" containerID="da2138a1963ac2288e977911f9259607e82d6948ca9469233a7330674062c9dc" Jan 26 19:22:11 crc kubenswrapper[4737]: I0126 19:22:11.001089 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" path="/var/lib/kubelet/pods/4304c6bc-d0ec-48bc-8c17-7aa4957588e7/volumes" Jan 26 19:22:21 crc 
kubenswrapper[4737]: I0126 19:22:21.923300 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924508 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="extract-content" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.924532 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="extract-content" Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924554 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.924563 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924582 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="extract-content" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.924590 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="extract-content" Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924618 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="extract-utilities" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.924626 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="extract-utilities" Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924643 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 
19:22:21.924650 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: E0126 19:22:21.924664 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="extract-utilities" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.924674 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="extract-utilities" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.925001 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c95d0d-6804-4095-8640-b8376e00ad4e" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.925022 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4304c6bc-d0ec-48bc-8c17-7aa4957588e7" containerName="registry-server" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.927878 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:21 crc kubenswrapper[4737]: I0126 19:22:21.938586 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.029450 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hfj\" (UniqueName: \"kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.029554 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.029574 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.132926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hfj\" (UniqueName: \"kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.133170 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.133202 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.133806 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.134045 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.154412 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hfj\" (UniqueName: \"kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj\") pod \"redhat-marketplace-64d24\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.267400 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:22 crc kubenswrapper[4737]: I0126 19:22:22.842643 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:23 crc kubenswrapper[4737]: I0126 19:22:23.380749 4737 generic.go:334] "Generic (PLEG): container finished" podID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerID="a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de" exitCode=0 Jan 26 19:22:23 crc kubenswrapper[4737]: I0126 19:22:23.380823 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerDied","Data":"a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de"} Jan 26 19:22:23 crc kubenswrapper[4737]: I0126 19:22:23.381108 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerStarted","Data":"9be4421c85d1689da605c0b380a1e89bf620c42ed47e5374ea793fdf67860ebd"} Jan 26 19:22:25 crc kubenswrapper[4737]: I0126 19:22:25.410700 4737 generic.go:334] "Generic (PLEG): container finished" podID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerID="f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587" exitCode=0 Jan 26 19:22:25 crc kubenswrapper[4737]: I0126 19:22:25.410763 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerDied","Data":"f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587"} Jan 26 19:22:26 crc kubenswrapper[4737]: I0126 19:22:26.428247 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" 
event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerStarted","Data":"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d"} Jan 26 19:22:26 crc kubenswrapper[4737]: I0126 19:22:26.462482 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-64d24" podStartSLOduration=2.988374166 podStartE2EDuration="5.462454428s" podCreationTimestamp="2026-01-26 19:22:21 +0000 UTC" firstStartedPulling="2026-01-26 19:22:23.385453565 +0000 UTC m=+3116.693648273" lastFinishedPulling="2026-01-26 19:22:25.859533837 +0000 UTC m=+3119.167728535" observedRunningTime="2026-01-26 19:22:26.456562213 +0000 UTC m=+3119.764756931" watchObservedRunningTime="2026-01-26 19:22:26.462454428 +0000 UTC m=+3119.770649136" Jan 26 19:22:30 crc kubenswrapper[4737]: I0126 19:22:30.948846 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:22:30 crc kubenswrapper[4737]: I0126 19:22:30.949479 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:22:30 crc kubenswrapper[4737]: I0126 19:22:30.949542 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:22:30 crc kubenswrapper[4737]: I0126 19:22:30.950610 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:22:30 crc kubenswrapper[4737]: I0126 19:22:30.950692 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" gracePeriod=600 Jan 26 19:22:31 crc kubenswrapper[4737]: I0126 19:22:31.561014 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" exitCode=0 Jan 26 19:22:31 crc kubenswrapper[4737]: I0126 19:22:31.561433 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959"} Jan 26 19:22:31 crc kubenswrapper[4737]: I0126 19:22:31.561495 4737 scope.go:117] "RemoveContainer" containerID="969d5bfda3f59282659c1c7839e3c4e96cf7dc6518f29fe22186994fb3b83944" Jan 26 19:22:31 crc kubenswrapper[4737]: E0126 19:22:31.586367 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.268555 4737 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.268911 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.320494 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.574835 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:22:32 crc kubenswrapper[4737]: E0126 19:22:32.575304 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.645213 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:32 crc kubenswrapper[4737]: I0126 19:22:32.706089 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:34 crc kubenswrapper[4737]: I0126 19:22:34.593245 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-64d24" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="registry-server" containerID="cri-o://873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d" gracePeriod=2 Jan 26 19:22:34 crc kubenswrapper[4737]: E0126 19:22:34.867342 4737 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f1ffc0_d867_4872_988f_71a58f4e1659.slice/crio-conmon-873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f1ffc0_d867_4872_988f_71a58f4e1659.slice/crio-873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d.scope\": RecentStats: unable to find data in memory cache]" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.104573 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.194389 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities\") pod \"f2f1ffc0-d867-4872-988f-71a58f4e1659\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.194458 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content\") pod \"f2f1ffc0-d867-4872-988f-71a58f4e1659\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.194514 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9hfj\" (UniqueName: \"kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj\") pod \"f2f1ffc0-d867-4872-988f-71a58f4e1659\" (UID: \"f2f1ffc0-d867-4872-988f-71a58f4e1659\") " Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.195589 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities" (OuterVolumeSpecName: "utilities") pod "f2f1ffc0-d867-4872-988f-71a58f4e1659" (UID: "f2f1ffc0-d867-4872-988f-71a58f4e1659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.201499 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj" (OuterVolumeSpecName: "kube-api-access-j9hfj") pod "f2f1ffc0-d867-4872-988f-71a58f4e1659" (UID: "f2f1ffc0-d867-4872-988f-71a58f4e1659"). InnerVolumeSpecName "kube-api-access-j9hfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.219191 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2f1ffc0-d867-4872-988f-71a58f4e1659" (UID: "f2f1ffc0-d867-4872-988f-71a58f4e1659"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.298732 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.298764 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f1ffc0-d867-4872-988f-71a58f4e1659-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.298776 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9hfj\" (UniqueName: \"kubernetes.io/projected/f2f1ffc0-d867-4872-988f-71a58f4e1659-kube-api-access-j9hfj\") on node \"crc\" DevicePath \"\"" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.606453 4737 generic.go:334] "Generic (PLEG): container finished" podID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerID="873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d" exitCode=0 Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.606501 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerDied","Data":"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d"} Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.606782 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64d24" event={"ID":"f2f1ffc0-d867-4872-988f-71a58f4e1659","Type":"ContainerDied","Data":"9be4421c85d1689da605c0b380a1e89bf620c42ed47e5374ea793fdf67860ebd"} Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.606803 4737 scope.go:117] "RemoveContainer" containerID="873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 
19:22:35.606561 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64d24" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.642473 4737 scope.go:117] "RemoveContainer" containerID="f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.668881 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.682615 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-64d24"] Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.692326 4737 scope.go:117] "RemoveContainer" containerID="a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.773327 4737 scope.go:117] "RemoveContainer" containerID="873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d" Jan 26 19:22:35 crc kubenswrapper[4737]: E0126 19:22:35.773735 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d\": container with ID starting with 873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d not found: ID does not exist" containerID="873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.773827 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d"} err="failed to get container status \"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d\": rpc error: code = NotFound desc = could not find container \"873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d\": container with ID starting with 
873715ac1d7945cb40c8df26700217731878b6dfab780a00263484d92a7a5f5d not found: ID does not exist" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.773909 4737 scope.go:117] "RemoveContainer" containerID="f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587" Jan 26 19:22:35 crc kubenswrapper[4737]: E0126 19:22:35.774359 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587\": container with ID starting with f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587 not found: ID does not exist" containerID="f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.774389 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587"} err="failed to get container status \"f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587\": rpc error: code = NotFound desc = could not find container \"f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587\": container with ID starting with f6c0385cf92a1086f9139f65911a07531376f76ca6250d72439e74c5cf03e587 not found: ID does not exist" Jan 26 19:22:35 crc kubenswrapper[4737]: I0126 19:22:35.774411 4737 scope.go:117] "RemoveContainer" containerID="a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de" Jan 26 19:22:35 crc kubenswrapper[4737]: E0126 19:22:35.774866 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de\": container with ID starting with a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de not found: ID does not exist" containerID="a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de" Jan 26 19:22:35 crc 
kubenswrapper[4737]: I0126 19:22:35.774930 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de"} err="failed to get container status \"a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de\": rpc error: code = NotFound desc = could not find container \"a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de\": container with ID starting with a49b2871827eb80815da89e06a05468ef43d54bd2269db019ab1de392f0e81de not found: ID does not exist" Jan 26 19:22:36 crc kubenswrapper[4737]: I0126 19:22:36.998219 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" path="/var/lib/kubelet/pods/f2f1ffc0-d867-4872-988f-71a58f4e1659/volumes" Jan 26 19:22:44 crc kubenswrapper[4737]: I0126 19:22:44.668350 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" podUID="5175d9d3-4bf9-4f52-be13-e33b02e03592" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:22:44 crc kubenswrapper[4737]: I0126 19:22:44.668593 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv" podUID="5175d9d3-4bf9-4f52-be13-e33b02e03592" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:22:44 crc kubenswrapper[4737]: I0126 19:22:44.983019 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:22:44 crc kubenswrapper[4737]: E0126 19:22:44.983384 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:22:58 crc kubenswrapper[4737]: I0126 19:22:58.983092 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:22:58 crc kubenswrapper[4737]: E0126 19:22:58.985267 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:23:12 crc kubenswrapper[4737]: I0126 19:23:12.983770 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:23:12 crc kubenswrapper[4737]: E0126 19:23:12.984680 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:23:23 crc kubenswrapper[4737]: I0126 19:23:23.981984 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:23:23 crc kubenswrapper[4737]: E0126 19:23:23.982855 4737 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.447499 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:32 crc kubenswrapper[4737]: E0126 19:23:32.448789 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="registry-server" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.448807 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="registry-server" Jan 26 19:23:32 crc kubenswrapper[4737]: E0126 19:23:32.448842 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="extract-utilities" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.448850 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="extract-utilities" Jan 26 19:23:32 crc kubenswrapper[4737]: E0126 19:23:32.448865 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="extract-content" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.448873 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="extract-content" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.449173 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f1ffc0-d867-4872-988f-71a58f4e1659" containerName="registry-server" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 
19:23:32.451239 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.463889 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.589204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.589369 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.589401 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlvx\" (UniqueName: \"kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.727440 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc 
kubenswrapper[4737]: I0126 19:23:32.727930 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.727971 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzlvx\" (UniqueName: \"kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.728015 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.728557 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 19:23:32.762805 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzlvx\" (UniqueName: \"kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx\") pod \"community-operators-5cmdj\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:32 crc kubenswrapper[4737]: I0126 
19:23:32.801306 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:33 crc kubenswrapper[4737]: I0126 19:23:33.465893 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:33 crc kubenswrapper[4737]: I0126 19:23:33.568166 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerStarted","Data":"8dc4447f07813310911e587b8420e7467da00b63a17499d0f699586a170b58ce"} Jan 26 19:23:34 crc kubenswrapper[4737]: I0126 19:23:34.591953 4737 generic.go:334] "Generic (PLEG): container finished" podID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerID="dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9" exitCode=0 Jan 26 19:23:34 crc kubenswrapper[4737]: I0126 19:23:34.592179 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerDied","Data":"dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9"} Jan 26 19:23:34 crc kubenswrapper[4737]: I0126 19:23:34.594842 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:23:36 crc kubenswrapper[4737]: I0126 19:23:36.619419 4737 generic.go:334] "Generic (PLEG): container finished" podID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerID="0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120" exitCode=0 Jan 26 19:23:36 crc kubenswrapper[4737]: I0126 19:23:36.619961 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerDied","Data":"0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120"} Jan 26 19:23:36 crc 
kubenswrapper[4737]: I0126 19:23:36.992070 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:23:36 crc kubenswrapper[4737]: E0126 19:23:36.992439 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:23:37 crc kubenswrapper[4737]: I0126 19:23:37.632915 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerStarted","Data":"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8"} Jan 26 19:23:37 crc kubenswrapper[4737]: I0126 19:23:37.663369 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5cmdj" podStartSLOduration=3.19496901 podStartE2EDuration="5.663347292s" podCreationTimestamp="2026-01-26 19:23:32 +0000 UTC" firstStartedPulling="2026-01-26 19:23:34.594540711 +0000 UTC m=+3187.902735419" lastFinishedPulling="2026-01-26 19:23:37.062918983 +0000 UTC m=+3190.371113701" observedRunningTime="2026-01-26 19:23:37.653485231 +0000 UTC m=+3190.961679939" watchObservedRunningTime="2026-01-26 19:23:37.663347292 +0000 UTC m=+3190.971542000" Jan 26 19:23:42 crc kubenswrapper[4737]: I0126 19:23:42.802013 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:42 crc kubenswrapper[4737]: I0126 19:23:42.802862 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 
19:23:42 crc kubenswrapper[4737]: I0126 19:23:42.874398 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:43 crc kubenswrapper[4737]: I0126 19:23:43.782345 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:43 crc kubenswrapper[4737]: I0126 19:23:43.850624 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:45 crc kubenswrapper[4737]: I0126 19:23:45.755422 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5cmdj" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="registry-server" containerID="cri-o://0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8" gracePeriod=2 Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.368808 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.449829 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content\") pod \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.450204 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzlvx\" (UniqueName: \"kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx\") pod \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.450383 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities\") pod \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\" (UID: \"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1\") " Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.451253 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities" (OuterVolumeSpecName: "utilities") pod "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" (UID: "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.469262 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx" (OuterVolumeSpecName: "kube-api-access-vzlvx") pod "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" (UID: "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1"). InnerVolumeSpecName "kube-api-access-vzlvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.553149 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzlvx\" (UniqueName: \"kubernetes.io/projected/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-kube-api-access-vzlvx\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.553415 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.775448 4737 generic.go:334] "Generic (PLEG): container finished" podID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerID="0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8" exitCode=0 Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.775507 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerDied","Data":"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8"} Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.775548 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmdj" event={"ID":"aeb623c7-98b7-4300-ac99-3fc6ea1e36a1","Type":"ContainerDied","Data":"8dc4447f07813310911e587b8420e7467da00b63a17499d0f699586a170b58ce"} Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.775577 4737 scope.go:117] "RemoveContainer" containerID="0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.778764 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cmdj" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.805037 4737 scope.go:117] "RemoveContainer" containerID="0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.816400 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" (UID: "aeb623c7-98b7-4300-ac99-3fc6ea1e36a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.837013 4737 scope.go:117] "RemoveContainer" containerID="dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.863622 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.893533 4737 scope.go:117] "RemoveContainer" containerID="0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8" Jan 26 19:23:46 crc kubenswrapper[4737]: E0126 19:23:46.894028 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8\": container with ID starting with 0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8 not found: ID does not exist" containerID="0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.894091 4737 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8"} err="failed to get container status \"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8\": rpc error: code = NotFound desc = could not find container \"0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8\": container with ID starting with 0449be93f8a8e8c0ff6244bd378ba5862154536f5fca1aa4837dc5c4b9372bd8 not found: ID does not exist" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.894123 4737 scope.go:117] "RemoveContainer" containerID="0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120" Jan 26 19:23:46 crc kubenswrapper[4737]: E0126 19:23:46.894658 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120\": container with ID starting with 0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120 not found: ID does not exist" containerID="0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.894681 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120"} err="failed to get container status \"0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120\": rpc error: code = NotFound desc = could not find container \"0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120\": container with ID starting with 0cb8e5013b8ff589c3280f1a752f75d77b1b390dde7fe568ec6b56a343876120 not found: ID does not exist" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.894702 4737 scope.go:117] "RemoveContainer" containerID="dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9" Jan 26 19:23:46 crc kubenswrapper[4737]: E0126 19:23:46.895728 4737 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9\": container with ID starting with dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9 not found: ID does not exist" containerID="dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9" Jan 26 19:23:46 crc kubenswrapper[4737]: I0126 19:23:46.895770 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9"} err="failed to get container status \"dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9\": rpc error: code = NotFound desc = could not find container \"dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9\": container with ID starting with dccb98b5bfa88f3baeec8f5229cfd6dec2abe64f8f70a404c927ed808014dca9 not found: ID does not exist" Jan 26 19:23:47 crc kubenswrapper[4737]: I0126 19:23:47.117606 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:47 crc kubenswrapper[4737]: I0126 19:23:47.130570 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5cmdj"] Jan 26 19:23:48 crc kubenswrapper[4737]: I0126 19:23:48.806359 4737 generic.go:334] "Generic (PLEG): container finished" podID="6bacdfa3-047c-42c9-a233-7daac1e8b65d" containerID="649bdc39080d814c22f03ac0246241ec4a5f80570c8aa472e5c8e3d00d872675" exitCode=0 Jan 26 19:23:48 crc kubenswrapper[4737]: I0126 19:23:48.806397 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" event={"ID":"6bacdfa3-047c-42c9-a233-7daac1e8b65d","Type":"ContainerDied","Data":"649bdc39080d814c22f03ac0246241ec4a5f80570c8aa472e5c8e3d00d872675"} Jan 26 19:23:49 crc kubenswrapper[4737]: I0126 19:23:48.999237 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" path="/var/lib/kubelet/pods/aeb623c7-98b7-4300-ac99-3fc6ea1e36a1/volumes" Jan 26 19:23:49 crc kubenswrapper[4737]: I0126 19:23:49.982227 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:23:49 crc kubenswrapper[4737]: E0126 19:23:49.982878 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.505164 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669334 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669583 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669619 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669642 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6z2h\" (UniqueName: \"kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669694 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669775 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.669836 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2\") pod \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\" (UID: \"6bacdfa3-047c-42c9-a233-7daac1e8b65d\") " Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.676866 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod 
"6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.693822 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h" (OuterVolumeSpecName: "kube-api-access-b6z2h") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "kube-api-access-b6z2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.707548 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.716213 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.717144 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). 
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.717791 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.721183 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory" (OuterVolumeSpecName: "inventory") pod "6bacdfa3-047c-42c9-a233-7daac1e8b65d" (UID: "6bacdfa3-047c-42c9-a233-7daac1e8b65d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.773356 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.773580 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.774998 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6z2h\" (UniqueName: \"kubernetes.io/projected/6bacdfa3-047c-42c9-a233-7daac1e8b65d-kube-api-access-b6z2h\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.775118 4737 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.775233 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.775335 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.775414 4737 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bacdfa3-047c-42c9-a233-7daac1e8b65d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.850789 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" event={"ID":"6bacdfa3-047c-42c9-a233-7daac1e8b65d","Type":"ContainerDied","Data":"4b5d1cec237eb4c8965255a38cb29f70573136aa2a2325e12381921ebe306f8f"} Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.850838 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v27w7" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.850838 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5d1cec237eb4c8965255a38cb29f70573136aa2a2325e12381921ebe306f8f" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.967994 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb"] Jan 26 19:23:50 crc kubenswrapper[4737]: E0126 19:23:50.968570 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bacdfa3-047c-42c9-a233-7daac1e8b65d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.968595 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bacdfa3-047c-42c9-a233-7daac1e8b65d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:23:50 crc kubenswrapper[4737]: E0126 19:23:50.968621 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="extract-utilities" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.968629 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="extract-utilities" Jan 26 19:23:50 crc kubenswrapper[4737]: E0126 19:23:50.968662 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="extract-content" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.968671 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="extract-content" Jan 26 19:23:50 crc kubenswrapper[4737]: E0126 19:23:50.968703 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="registry-server" Jan 26 19:23:50 crc 
kubenswrapper[4737]: I0126 19:23:50.968712 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="registry-server" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.968979 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bacdfa3-047c-42c9-a233-7daac1e8b65d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.969004 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb623c7-98b7-4300-ac99-3fc6ea1e36a1" containerName="registry-server" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.970095 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.987903 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.988930 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.989164 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.989286 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.989184 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.995884 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.996172 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.996428 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.996585 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.996695 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tg2xm\" (UniqueName: \"kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.996878 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:50 crc kubenswrapper[4737]: I0126 19:23:50.997543 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.021553 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb"] Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.099655 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 
19:23:51.100313 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.100441 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.100693 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.100802 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.100894 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2xm\" (UniqueName: \"kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.100998 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.105857 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.106722 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.106943 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.108225 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.108485 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.110656 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.126283 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2xm\" (UniqueName: 
\"kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.315520 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:23:51 crc kubenswrapper[4737]: I0126 19:23:51.977815 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb"] Jan 26 19:23:52 crc kubenswrapper[4737]: I0126 19:23:52.875938 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" event={"ID":"fe3a5992-1b84-4df9-bebe-3f0060fe631d","Type":"ContainerStarted","Data":"8b730ab8c6c56f19a178f8be79bbc317f460f204940ffb36c68cbeb633cfb8db"} Jan 26 19:23:53 crc kubenswrapper[4737]: I0126 19:23:53.894141 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" event={"ID":"fe3a5992-1b84-4df9-bebe-3f0060fe631d","Type":"ContainerStarted","Data":"3003a756d8653376bf64220b9e83ae9b9db8640532b9bb41f2dd5e1a6dc7ca19"} Jan 26 19:23:53 crc kubenswrapper[4737]: I0126 19:23:53.930599 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" podStartSLOduration=3.177103859 podStartE2EDuration="3.930575226s" podCreationTimestamp="2026-01-26 19:23:50 +0000 UTC" firstStartedPulling="2026-01-26 19:23:51.989300608 +0000 UTC m=+3205.297495316" lastFinishedPulling="2026-01-26 19:23:52.742771975 +0000 UTC m=+3206.050966683" observedRunningTime="2026-01-26 19:23:53.917910326 +0000 UTC m=+3207.226105054" 
watchObservedRunningTime="2026-01-26 19:23:53.930575226 +0000 UTC m=+3207.238769944" Jan 26 19:24:03 crc kubenswrapper[4737]: I0126 19:24:03.982334 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:24:03 crc kubenswrapper[4737]: E0126 19:24:03.983387 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:24:18 crc kubenswrapper[4737]: I0126 19:24:18.982372 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:24:18 crc kubenswrapper[4737]: E0126 19:24:18.984330 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:24:30 crc kubenswrapper[4737]: I0126 19:24:30.982215 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:24:30 crc kubenswrapper[4737]: E0126 19:24:30.983004 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:24:43 crc kubenswrapper[4737]: I0126 19:24:43.982019 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:24:43 crc kubenswrapper[4737]: E0126 19:24:43.982965 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:24:54 crc kubenswrapper[4737]: I0126 19:24:54.982838 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:24:54 crc kubenswrapper[4737]: E0126 19:24:54.983876 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:25:06 crc kubenswrapper[4737]: I0126 19:25:06.995527 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:25:06 crc kubenswrapper[4737]: E0126 19:25:06.996399 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:25:19 crc kubenswrapper[4737]: I0126 19:25:19.982667 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:25:19 crc kubenswrapper[4737]: E0126 19:25:19.983572 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:25:30 crc kubenswrapper[4737]: I0126 19:25:30.983130 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:25:30 crc kubenswrapper[4737]: E0126 19:25:30.985244 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:25:45 crc kubenswrapper[4737]: I0126 19:25:45.982219 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:25:45 crc kubenswrapper[4737]: E0126 19:25:45.984446 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:26:00 crc kubenswrapper[4737]: I0126 19:26:00.981769 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:26:00 crc kubenswrapper[4737]: E0126 19:26:00.982822 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:26:10 crc kubenswrapper[4737]: I0126 19:26:10.513385 4737 generic.go:334] "Generic (PLEG): container finished" podID="fe3a5992-1b84-4df9-bebe-3f0060fe631d" containerID="3003a756d8653376bf64220b9e83ae9b9db8640532b9bb41f2dd5e1a6dc7ca19" exitCode=0 Jan 26 19:26:10 crc kubenswrapper[4737]: I0126 19:26:10.513475 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" event={"ID":"fe3a5992-1b84-4df9-bebe-3f0060fe631d","Type":"ContainerDied","Data":"3003a756d8653376bf64220b9e83ae9b9db8640532b9bb41f2dd5e1a6dc7ca19"} Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.108973 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.249878 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250011 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250296 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250403 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250457 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2xm\" (UniqueName: \"kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 
19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250521 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.250629 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0\") pod \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\" (UID: \"fe3a5992-1b84-4df9-bebe-3f0060fe631d\") " Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.258777 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.258990 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm" (OuterVolumeSpecName: "kube-api-access-tg2xm") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "kube-api-access-tg2xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.287595 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.296135 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory" (OuterVolumeSpecName: "inventory") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.296876 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.298482 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.300945 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "fe3a5992-1b84-4df9-bebe-3f0060fe631d" (UID: "fe3a5992-1b84-4df9-bebe-3f0060fe631d"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359651 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359741 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359763 4737 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359782 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359795 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2xm\" (UniqueName: \"kubernetes.io/projected/fe3a5992-1b84-4df9-bebe-3f0060fe631d-kube-api-access-tg2xm\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 
crc kubenswrapper[4737]: I0126 19:26:12.359807 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.359818 4737 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/fe3a5992-1b84-4df9-bebe-3f0060fe631d-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.535657 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" event={"ID":"fe3a5992-1b84-4df9-bebe-3f0060fe631d","Type":"ContainerDied","Data":"8b730ab8c6c56f19a178f8be79bbc317f460f204940ffb36c68cbeb633cfb8db"} Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.535705 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b730ab8c6c56f19a178f8be79bbc317f460f204940ffb36c68cbeb633cfb8db" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.535740 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.650185 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr"] Jan 26 19:26:12 crc kubenswrapper[4737]: E0126 19:26:12.650798 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a5992-1b84-4df9-bebe-3f0060fe631d" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.650822 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3a5992-1b84-4df9-bebe-3f0060fe631d" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.651160 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3a5992-1b84-4df9-bebe-3f0060fe631d" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.652268 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.655156 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.655171 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.655335 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.655670 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.662880 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xlvv9" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.685156 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr"] Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.771029 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjj8\" (UniqueName: \"kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.771208 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") 
" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.771285 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.771498 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.771619 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.873750 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 
19:26:12.873839 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.873902 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpjj8\" (UniqueName: \"kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.873927 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.873969 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.877472 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.878270 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.878903 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.883427 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.900766 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpjj8\" (UniqueName: \"kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8\") pod \"logging-edpm-deployment-openstack-edpm-ipam-p6bgr\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:12 crc kubenswrapper[4737]: I0126 19:26:12.976532 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:13 crc kubenswrapper[4737]: I0126 19:26:13.578725 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr"] Jan 26 19:26:13 crc kubenswrapper[4737]: I0126 19:26:13.982569 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:26:13 crc kubenswrapper[4737]: E0126 19:26:13.982961 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:26:14 crc kubenswrapper[4737]: I0126 19:26:14.557449 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" event={"ID":"9f1823e5-fd64-4ddd-a4ed-5727de977754","Type":"ContainerStarted","Data":"773732afe35b4d3e7f0dd12bff610a85618988fcf0c8bd35394661b5dc33eb20"} Jan 26 19:26:14 crc kubenswrapper[4737]: I0126 19:26:14.557818 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" event={"ID":"9f1823e5-fd64-4ddd-a4ed-5727de977754","Type":"ContainerStarted","Data":"177d4c61681d548720f7fc33257482ef8ba02a6a947abe9b28e120c46f70df4e"} Jan 26 19:26:14 crc kubenswrapper[4737]: I0126 19:26:14.576963 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" podStartSLOduration=2.051515822 podStartE2EDuration="2.576944885s" podCreationTimestamp="2026-01-26 19:26:12 +0000 UTC" firstStartedPulling="2026-01-26 19:26:13.569582133 +0000 UTC m=+3346.877776841" lastFinishedPulling="2026-01-26 19:26:14.095011196 +0000 UTC m=+3347.403205904" observedRunningTime="2026-01-26 19:26:14.576261949 +0000 UTC m=+3347.884456677" watchObservedRunningTime="2026-01-26 19:26:14.576944885 +0000 UTC m=+3347.885139593" Jan 26 19:26:25 crc kubenswrapper[4737]: I0126 19:26:25.982231 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:26:25 crc kubenswrapper[4737]: E0126 19:26:25.984343 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:26:31 crc kubenswrapper[4737]: I0126 19:26:31.742733 4737 generic.go:334] "Generic (PLEG): container finished" podID="9f1823e5-fd64-4ddd-a4ed-5727de977754" containerID="773732afe35b4d3e7f0dd12bff610a85618988fcf0c8bd35394661b5dc33eb20" exitCode=0 Jan 26 19:26:31 crc kubenswrapper[4737]: I0126 19:26:31.742831 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" event={"ID":"9f1823e5-fd64-4ddd-a4ed-5727de977754","Type":"ContainerDied","Data":"773732afe35b4d3e7f0dd12bff610a85618988fcf0c8bd35394661b5dc33eb20"} Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.310162 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.419812 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam\") pod \"9f1823e5-fd64-4ddd-a4ed-5727de977754\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.419957 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpjj8\" (UniqueName: \"kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8\") pod \"9f1823e5-fd64-4ddd-a4ed-5727de977754\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.420141 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0\") pod \"9f1823e5-fd64-4ddd-a4ed-5727de977754\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.420220 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory\") pod \"9f1823e5-fd64-4ddd-a4ed-5727de977754\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.420243 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1\") pod \"9f1823e5-fd64-4ddd-a4ed-5727de977754\" (UID: \"9f1823e5-fd64-4ddd-a4ed-5727de977754\") " Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 
19:26:33.426038 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8" (OuterVolumeSpecName: "kube-api-access-dpjj8") pod "9f1823e5-fd64-4ddd-a4ed-5727de977754" (UID: "9f1823e5-fd64-4ddd-a4ed-5727de977754"). InnerVolumeSpecName "kube-api-access-dpjj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.452626 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory" (OuterVolumeSpecName: "inventory") pod "9f1823e5-fd64-4ddd-a4ed-5727de977754" (UID: "9f1823e5-fd64-4ddd-a4ed-5727de977754"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.455953 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f1823e5-fd64-4ddd-a4ed-5727de977754" (UID: "9f1823e5-fd64-4ddd-a4ed-5727de977754"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.456924 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "9f1823e5-fd64-4ddd-a4ed-5727de977754" (UID: "9f1823e5-fd64-4ddd-a4ed-5727de977754"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.458309 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "9f1823e5-fd64-4ddd-a4ed-5727de977754" (UID: "9f1823e5-fd64-4ddd-a4ed-5727de977754"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.524459 4737 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.524802 4737 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.524822 4737 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.524838 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f1823e5-fd64-4ddd-a4ed-5727de977754-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.524850 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpjj8\" (UniqueName: \"kubernetes.io/projected/9f1823e5-fd64-4ddd-a4ed-5727de977754-kube-api-access-dpjj8\") on node \"crc\" DevicePath \"\"" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 
19:26:33.766156 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" event={"ID":"9f1823e5-fd64-4ddd-a4ed-5727de977754","Type":"ContainerDied","Data":"177d4c61681d548720f7fc33257482ef8ba02a6a947abe9b28e120c46f70df4e"} Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.766195 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177d4c61681d548720f7fc33257482ef8ba02a6a947abe9b28e120c46f70df4e" Jan 26 19:26:33 crc kubenswrapper[4737]: I0126 19:26:33.766216 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-p6bgr" Jan 26 19:26:39 crc kubenswrapper[4737]: I0126 19:26:39.984189 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:26:39 crc kubenswrapper[4737]: E0126 19:26:39.985039 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:26:52 crc kubenswrapper[4737]: I0126 19:26:52.981988 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:26:52 crc kubenswrapper[4737]: E0126 19:26:52.982848 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:27:05 crc kubenswrapper[4737]: E0126 19:27:05.351837 4737 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.236:34002->38.102.83.236:42217: read tcp 38.102.83.236:34002->38.102.83.236:42217: read: connection reset by peer Jan 26 19:27:07 crc kubenswrapper[4737]: I0126 19:27:07.984359 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:27:07 crc kubenswrapper[4737]: E0126 19:27:07.986825 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:27:21 crc kubenswrapper[4737]: I0126 19:27:21.983843 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:27:21 crc kubenswrapper[4737]: E0126 19:27:21.984739 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:27:34 crc kubenswrapper[4737]: I0126 19:27:34.982577 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:27:35 crc kubenswrapper[4737]: I0126 19:27:35.624169 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359"} Jan 26 19:27:36 crc kubenswrapper[4737]: I0126 19:27:36.356878 4737 scope.go:117] "RemoveContainer" containerID="0a5979b6080dfbc22d77a233b61b107fa496f1e5a25ec3a158a03156766ac821" Jan 26 19:27:36 crc kubenswrapper[4737]: I0126 19:27:36.409372 4737 scope.go:117] "RemoveContainer" containerID="d2c560b45d6d33684ab6a85e532a88ec572064212c6a4b09088314a24b35ac9c" Jan 26 19:28:36 crc kubenswrapper[4737]: I0126 19:28:36.484022 4737 scope.go:117] "RemoveContainer" containerID="7051fe1f759fa8eb179f4745183819a4d06ed53c2fc1f2af0557fd4b756677c3" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.151399 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs"] Jan 26 19:30:00 crc kubenswrapper[4737]: E0126 19:30:00.152436 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1823e5-fd64-4ddd-a4ed-5727de977754" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.152451 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1823e5-fd64-4ddd-a4ed-5727de977754" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.152705 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1823e5-fd64-4ddd-a4ed-5727de977754" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.153683 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.156032 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.156633 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.210220 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs"] Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.231464 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.231934 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.232110 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4n2\" (UniqueName: \"kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.334852 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.334905 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4n2\" (UniqueName: \"kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.335080 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.336022 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.341822 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.362006 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4n2\" (UniqueName: \"kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2\") pod \"collect-profiles-29490930-4k5qs\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.476652 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.948769 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.949125 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:30:00 crc kubenswrapper[4737]: I0126 19:30:00.965130 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs"] Jan 26 19:30:01 crc kubenswrapper[4737]: I0126 19:30:01.215825 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" event={"ID":"04d5c317-4d69-4c80-8d0e-98dcfb41af6c","Type":"ContainerStarted","Data":"8b51cf34a6a0e319157bc98ff85610b708b510bb897a8c2dc1b086a6a339dd3a"} Jan 26 19:30:01 crc kubenswrapper[4737]: I0126 19:30:01.215870 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" event={"ID":"04d5c317-4d69-4c80-8d0e-98dcfb41af6c","Type":"ContainerStarted","Data":"9a52ff0a4f6a61e2f86af64b21be7a81afa9a66b8d03d10bf210c1dd3230e2ba"} Jan 26 19:30:02 crc kubenswrapper[4737]: I0126 19:30:02.233031 4737 generic.go:334] "Generic (PLEG): container finished" podID="04d5c317-4d69-4c80-8d0e-98dcfb41af6c" containerID="8b51cf34a6a0e319157bc98ff85610b708b510bb897a8c2dc1b086a6a339dd3a" exitCode=0 Jan 26 19:30:02 crc kubenswrapper[4737]: I0126 19:30:02.233395 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" event={"ID":"04d5c317-4d69-4c80-8d0e-98dcfb41af6c","Type":"ContainerDied","Data":"8b51cf34a6a0e319157bc98ff85610b708b510bb897a8c2dc1b086a6a339dd3a"} Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.690257 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.737393 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x4n2\" (UniqueName: \"kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2\") pod \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.737510 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume\") pod \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.737853 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume\") pod \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\" (UID: \"04d5c317-4d69-4c80-8d0e-98dcfb41af6c\") " Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.738206 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "04d5c317-4d69-4c80-8d0e-98dcfb41af6c" (UID: "04d5c317-4d69-4c80-8d0e-98dcfb41af6c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.739066 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.744676 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04d5c317-4d69-4c80-8d0e-98dcfb41af6c" (UID: "04d5c317-4d69-4c80-8d0e-98dcfb41af6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.745787 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2" (OuterVolumeSpecName: "kube-api-access-9x4n2") pod "04d5c317-4d69-4c80-8d0e-98dcfb41af6c" (UID: "04d5c317-4d69-4c80-8d0e-98dcfb41af6c"). InnerVolumeSpecName "kube-api-access-9x4n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.841535 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x4n2\" (UniqueName: \"kubernetes.io/projected/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-kube-api-access-9x4n2\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:03 crc kubenswrapper[4737]: I0126 19:30:03.841658 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d5c317-4d69-4c80-8d0e-98dcfb41af6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.255505 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.255505 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs" event={"ID":"04d5c317-4d69-4c80-8d0e-98dcfb41af6c","Type":"ContainerDied","Data":"9a52ff0a4f6a61e2f86af64b21be7a81afa9a66b8d03d10bf210c1dd3230e2ba"} Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.255881 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a52ff0a4f6a61e2f86af64b21be7a81afa9a66b8d03d10bf210c1dd3230e2ba" Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.786421 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z"] Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.799675 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-vng2z"] Jan 26 19:30:04 crc kubenswrapper[4737]: I0126 19:30:04.999650 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52054249-74bc-48df-978b-dcf49912e6c7" path="/var/lib/kubelet/pods/52054249-74bc-48df-978b-dcf49912e6c7/volumes" Jan 26 19:30:30 crc kubenswrapper[4737]: I0126 19:30:30.949079 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:30:30 crc kubenswrapper[4737]: I0126 19:30:30.949718 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 19:30:36 crc kubenswrapper[4737]: I0126 19:30:36.600199 4737 scope.go:117] "RemoveContainer" containerID="c0fa96a76151b48afaa093e6b9a004113454e0a4197ed339d2d6049ead31e772" Jan 26 19:31:00 crc kubenswrapper[4737]: I0126 19:31:00.949488 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:31:00 crc kubenswrapper[4737]: I0126 19:31:00.950041 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:31:00 crc kubenswrapper[4737]: I0126 19:31:00.950122 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:31:00 crc kubenswrapper[4737]: I0126 19:31:00.951063 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:31:00 crc kubenswrapper[4737]: I0126 19:31:00.951130 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359" 
gracePeriod=600 Jan 26 19:31:01 crc kubenswrapper[4737]: I0126 19:31:01.947999 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359" exitCode=0 Jan 26 19:31:01 crc kubenswrapper[4737]: I0126 19:31:01.948134 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359"} Jan 26 19:31:01 crc kubenswrapper[4737]: I0126 19:31:01.949140 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3"} Jan 26 19:31:01 crc kubenswrapper[4737]: I0126 19:31:01.949200 4737 scope.go:117] "RemoveContainer" containerID="0456e4438ad40c4f582e22f53643b0ff4e17de4961aa6f19105c775dca022959" Jan 26 19:32:24 crc kubenswrapper[4737]: I0126 19:32:24.915603 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:24 crc kubenswrapper[4737]: E0126 19:32:24.917087 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d5c317-4d69-4c80-8d0e-98dcfb41af6c" containerName="collect-profiles" Jan 26 19:32:24 crc kubenswrapper[4737]: I0126 19:32:24.917119 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d5c317-4d69-4c80-8d0e-98dcfb41af6c" containerName="collect-profiles" Jan 26 19:32:24 crc kubenswrapper[4737]: I0126 19:32:24.917492 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d5c317-4d69-4c80-8d0e-98dcfb41af6c" containerName="collect-profiles" Jan 26 19:32:24 crc kubenswrapper[4737]: I0126 19:32:24.933162 4737 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:24 crc kubenswrapper[4737]: I0126 19:32:24.949784 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.093003 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5d4\" (UniqueName: \"kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.093129 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.093708 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.198283 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm5d4\" (UniqueName: \"kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.198349 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.198434 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.199223 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.199331 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.222914 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm5d4\" (UniqueName: \"kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4\") pod \"certified-operators-g9czn\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.277348 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:25 crc kubenswrapper[4737]: I0126 19:32:25.948092 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:26 crc kubenswrapper[4737]: I0126 19:32:26.928253 4737 generic.go:334] "Generic (PLEG): container finished" podID="86fa1de9-2021-4412-8454-dce892e23024" containerID="edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8" exitCode=0 Jan 26 19:32:26 crc kubenswrapper[4737]: I0126 19:32:26.928331 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerDied","Data":"edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8"} Jan 26 19:32:26 crc kubenswrapper[4737]: I0126 19:32:26.928589 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerStarted","Data":"091c3d3c2c3a7116acd20b1cb2810794a6c24ad0669f2c36841e9929be50e67d"} Jan 26 19:32:26 crc kubenswrapper[4737]: I0126 19:32:26.930929 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:32:28 crc kubenswrapper[4737]: I0126 19:32:28.971310 4737 generic.go:334] "Generic (PLEG): container finished" podID="86fa1de9-2021-4412-8454-dce892e23024" containerID="88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da" exitCode=0 Jan 26 19:32:28 crc kubenswrapper[4737]: I0126 19:32:28.972300 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerDied","Data":"88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da"} Jan 26 19:32:29 crc kubenswrapper[4737]: I0126 19:32:29.984873 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerStarted","Data":"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d"} Jan 26 19:32:30 crc kubenswrapper[4737]: I0126 19:32:30.006021 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g9czn" podStartSLOduration=3.501596653 podStartE2EDuration="6.006004997s" podCreationTimestamp="2026-01-26 19:32:24 +0000 UTC" firstStartedPulling="2026-01-26 19:32:26.930605895 +0000 UTC m=+3720.238800603" lastFinishedPulling="2026-01-26 19:32:29.435014249 +0000 UTC m=+3722.743208947" observedRunningTime="2026-01-26 19:32:30.003998467 +0000 UTC m=+3723.312193195" watchObservedRunningTime="2026-01-26 19:32:30.006004997 +0000 UTC m=+3723.314199705" Jan 26 19:32:35 crc kubenswrapper[4737]: I0126 19:32:35.277982 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:35 crc kubenswrapper[4737]: I0126 19:32:35.278888 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:35 crc kubenswrapper[4737]: I0126 19:32:35.342510 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:36 crc kubenswrapper[4737]: I0126 19:32:36.101092 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:36 crc kubenswrapper[4737]: I0126 19:32:36.157735 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.104565 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g9czn" 
podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="registry-server" containerID="cri-o://0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d" gracePeriod=2 Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.653863 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.851885 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities\") pod \"86fa1de9-2021-4412-8454-dce892e23024\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.852054 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content\") pod \"86fa1de9-2021-4412-8454-dce892e23024\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.852152 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm5d4\" (UniqueName: \"kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4\") pod \"86fa1de9-2021-4412-8454-dce892e23024\" (UID: \"86fa1de9-2021-4412-8454-dce892e23024\") " Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.852959 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities" (OuterVolumeSpecName: "utilities") pod "86fa1de9-2021-4412-8454-dce892e23024" (UID: "86fa1de9-2021-4412-8454-dce892e23024"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.857653 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4" (OuterVolumeSpecName: "kube-api-access-wm5d4") pod "86fa1de9-2021-4412-8454-dce892e23024" (UID: "86fa1de9-2021-4412-8454-dce892e23024"). InnerVolumeSpecName "kube-api-access-wm5d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.913694 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86fa1de9-2021-4412-8454-dce892e23024" (UID: "86fa1de9-2021-4412-8454-dce892e23024"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.955829 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.955868 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fa1de9-2021-4412-8454-dce892e23024-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:32:38 crc kubenswrapper[4737]: I0126 19:32:38.955882 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm5d4\" (UniqueName: \"kubernetes.io/projected/86fa1de9-2021-4412-8454-dce892e23024-kube-api-access-wm5d4\") on node \"crc\" DevicePath \"\"" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.116260 4737 generic.go:334] "Generic (PLEG): container finished" podID="86fa1de9-2021-4412-8454-dce892e23024" 
containerID="0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d" exitCode=0 Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.116308 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerDied","Data":"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d"} Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.116346 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g9czn" event={"ID":"86fa1de9-2021-4412-8454-dce892e23024","Type":"ContainerDied","Data":"091c3d3c2c3a7116acd20b1cb2810794a6c24ad0669f2c36841e9929be50e67d"} Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.116369 4737 scope.go:117] "RemoveContainer" containerID="0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.116372 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g9czn" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.142308 4737 scope.go:117] "RemoveContainer" containerID="88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.148498 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.160446 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g9czn"] Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.172712 4737 scope.go:117] "RemoveContainer" containerID="edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.229101 4737 scope.go:117] "RemoveContainer" containerID="0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d" Jan 26 19:32:39 crc kubenswrapper[4737]: E0126 19:32:39.229631 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d\": container with ID starting with 0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d not found: ID does not exist" containerID="0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.229688 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d"} err="failed to get container status \"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d\": rpc error: code = NotFound desc = could not find container \"0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d\": container with ID starting with 0a9852aa9ba810605deff539a02597cfbb1fecff59387898eeb04b522420b67d not 
found: ID does not exist" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.229722 4737 scope.go:117] "RemoveContainer" containerID="88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da" Jan 26 19:32:39 crc kubenswrapper[4737]: E0126 19:32:39.230141 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da\": container with ID starting with 88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da not found: ID does not exist" containerID="88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.230182 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da"} err="failed to get container status \"88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da\": rpc error: code = NotFound desc = could not find container \"88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da\": container with ID starting with 88e837f4f5eada7bafe21bc005c0fdf2ac1a670d79627afe8c15e08c8ed667da not found: ID does not exist" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.230221 4737 scope.go:117] "RemoveContainer" containerID="edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8" Jan 26 19:32:39 crc kubenswrapper[4737]: E0126 19:32:39.230623 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8\": container with ID starting with edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8 not found: ID does not exist" containerID="edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8" Jan 26 19:32:39 crc kubenswrapper[4737]: I0126 19:32:39.230659 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8"} err="failed to get container status \"edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8\": rpc error: code = NotFound desc = could not find container \"edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8\": container with ID starting with edaa408677560c7e5190553642900e94cc0c629de32d47910d1a013fa94e2ce8 not found: ID does not exist" Jan 26 19:32:40 crc kubenswrapper[4737]: I0126 19:32:40.998647 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fa1de9-2021-4412-8454-dce892e23024" path="/var/lib/kubelet/pods/86fa1de9-2021-4412-8454-dce892e23024/volumes" Jan 26 19:33:30 crc kubenswrapper[4737]: I0126 19:33:30.948866 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:33:30 crc kubenswrapper[4737]: I0126 19:33:30.949645 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:34:00 crc kubenswrapper[4737]: I0126 19:34:00.949050 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:34:00 crc kubenswrapper[4737]: I0126 19:34:00.949610 4737 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:34:30 crc kubenswrapper[4737]: I0126 19:34:30.949051 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:34:30 crc kubenswrapper[4737]: I0126 19:34:30.949668 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:34:30 crc kubenswrapper[4737]: I0126 19:34:30.949719 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:34:30 crc kubenswrapper[4737]: I0126 19:34:30.950715 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:34:30 crc kubenswrapper[4737]: I0126 19:34:30.950774 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" 
containerID="cri-o://bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" gracePeriod=600 Jan 26 19:34:31 crc kubenswrapper[4737]: E0126 19:34:31.080792 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:34:31 crc kubenswrapper[4737]: I0126 19:34:31.474656 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" exitCode=0 Jan 26 19:34:31 crc kubenswrapper[4737]: I0126 19:34:31.474719 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3"} Jan 26 19:34:31 crc kubenswrapper[4737]: I0126 19:34:31.474783 4737 scope.go:117] "RemoveContainer" containerID="14c7fd260bae92a07afbd01d5dd27c7d2166d255896a3c62d5ce12f51b34b359" Jan 26 19:34:31 crc kubenswrapper[4737]: I0126 19:34:31.477262 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:34:31 crc kubenswrapper[4737]: E0126 19:34:31.479487 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:34:42 crc kubenswrapper[4737]: I0126 19:34:42.983008 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:34:42 crc kubenswrapper[4737]: E0126 19:34:42.984045 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:34:57 crc kubenswrapper[4737]: I0126 19:34:57.981910 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:34:57 crc kubenswrapper[4737]: E0126 19:34:57.982756 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:35:12 crc kubenswrapper[4737]: I0126 19:35:12.982388 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:35:12 crc kubenswrapper[4737]: E0126 19:35:12.983304 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:35:23 crc kubenswrapper[4737]: I0126 19:35:23.982234 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:35:23 crc kubenswrapper[4737]: E0126 19:35:23.983050 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:35:36 crc kubenswrapper[4737]: I0126 19:35:36.982818 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:35:36 crc kubenswrapper[4737]: E0126 19:35:36.983807 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:35:48 crc kubenswrapper[4737]: I0126 19:35:48.982132 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:35:48 crc kubenswrapper[4737]: E0126 19:35:48.982864 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:00 crc kubenswrapper[4737]: I0126 19:36:00.982596 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:36:00 crc kubenswrapper[4737]: E0126 19:36:00.983713 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:11 crc kubenswrapper[4737]: I0126 19:36:11.982967 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:36:11 crc kubenswrapper[4737]: E0126 19:36:11.983917 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:26 crc kubenswrapper[4737]: I0126 19:36:26.991568 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:36:26 crc kubenswrapper[4737]: E0126 19:36:26.992584 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.017854 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:30 crc kubenswrapper[4737]: E0126 19:36:30.019043 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="registry-server" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.019059 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="registry-server" Jan 26 19:36:30 crc kubenswrapper[4737]: E0126 19:36:30.019097 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="extract-utilities" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.019105 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="extract-utilities" Jan 26 19:36:30 crc kubenswrapper[4737]: E0126 19:36:30.019144 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="extract-content" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.019151 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="extract-content" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.019369 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fa1de9-2021-4412-8454-dce892e23024" containerName="registry-server" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.022658 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.032602 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.156503 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.157376 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmnq\" (UniqueName: \"kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.157699 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.260200 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.260369 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.260500 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmnq\" (UniqueName: \"kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.260976 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.261240 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.280166 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmnq\" (UniqueName: \"kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq\") pod \"community-operators-hh8tt\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:30 crc kubenswrapper[4737]: I0126 19:36:30.350956 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:31 crc kubenswrapper[4737]: I0126 19:36:31.025320 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:31 crc kubenswrapper[4737]: I0126 19:36:31.740404 4737 generic.go:334] "Generic (PLEG): container finished" podID="398c29bb-610f-410f-9dfa-c33389465a6d" containerID="f57dcd1c2e96e3c5e2c755c8a314bc3590c5fc50ad5fd2a1e7134219df67cfc2" exitCode=0 Jan 26 19:36:31 crc kubenswrapper[4737]: I0126 19:36:31.740518 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerDied","Data":"f57dcd1c2e96e3c5e2c755c8a314bc3590c5fc50ad5fd2a1e7134219df67cfc2"} Jan 26 19:36:31 crc kubenswrapper[4737]: I0126 19:36:31.740751 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerStarted","Data":"8742e9ae2d70fd5dfe92416dbaf1b1d130338a2b7d0670a8d2d83319c947610f"} Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.826845 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.831339 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.836613 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.929462 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xds\" (UniqueName: \"kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.929625 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:32 crc kubenswrapper[4737]: I0126 19:36:32.929777 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.024476 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.027353 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.033185 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xds\" (UniqueName: \"kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.033315 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.033430 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.034130 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.034729 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " 
pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.044865 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.071879 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xds\" (UniqueName: \"kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds\") pod \"redhat-marketplace-bvtxb\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.138201 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4d42\" (UniqueName: \"kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.138495 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.139977 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.220086 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.249224 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.249380 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.249786 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4d42\" (UniqueName: \"kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.250792 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.252147 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " 
pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.272651 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4d42\" (UniqueName: \"kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42\") pod \"redhat-operators-b6l5c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.369381 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.766320 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerStarted","Data":"5c6305f12028dcba9684d4e747a926106efcc6771739b8c026c05b50ee092dc5"} Jan 26 19:36:33 crc kubenswrapper[4737]: I0126 19:36:33.915459 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:33 crc kubenswrapper[4737]: W0126 19:36:33.973177 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd751fa9_1eca_47a7_9a04_3a27b21458cf.slice/crio-591854ea01c90373ca319ec296b7f464d53b505035151ba6d0f9aacf2f6dbf4a WatchSource:0}: Error finding container 591854ea01c90373ca319ec296b7f464d53b505035151ba6d0f9aacf2f6dbf4a: Status 404 returned error can't find the container with id 591854ea01c90373ca319ec296b7f464d53b505035151ba6d0f9aacf2f6dbf4a Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.119853 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:34 crc kubenswrapper[4737]: W0126 19:36:34.124381 4737 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23e38a69_06a9_43c0_ac6e_b092d96a777c.slice/crio-da179e9ea3f3840eded736b46cdc234ba2d21f351e7371c41ae155909a463399 WatchSource:0}: Error finding container da179e9ea3f3840eded736b46cdc234ba2d21f351e7371c41ae155909a463399: Status 404 returned error can't find the container with id da179e9ea3f3840eded736b46cdc234ba2d21f351e7371c41ae155909a463399 Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.786773 4737 generic.go:334] "Generic (PLEG): container finished" podID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerID="96a82969f41a58ee1aad81f1d3bb8b7ca77cdbefe2738ba0550807101b01d54a" exitCode=0 Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.787140 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerDied","Data":"96a82969f41a58ee1aad81f1d3bb8b7ca77cdbefe2738ba0550807101b01d54a"} Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.787178 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerStarted","Data":"da179e9ea3f3840eded736b46cdc234ba2d21f351e7371c41ae155909a463399"} Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.795192 4737 generic.go:334] "Generic (PLEG): container finished" podID="398c29bb-610f-410f-9dfa-c33389465a6d" containerID="5c6305f12028dcba9684d4e747a926106efcc6771739b8c026c05b50ee092dc5" exitCode=0 Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.795289 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerDied","Data":"5c6305f12028dcba9684d4e747a926106efcc6771739b8c026c05b50ee092dc5"} Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.807520 4737 generic.go:334] "Generic (PLEG): container 
finished" podID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerID="8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb" exitCode=0 Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.807568 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerDied","Data":"8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb"} Jan 26 19:36:34 crc kubenswrapper[4737]: I0126 19:36:34.807598 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerStarted","Data":"591854ea01c90373ca319ec296b7f464d53b505035151ba6d0f9aacf2f6dbf4a"} Jan 26 19:36:35 crc kubenswrapper[4737]: I0126 19:36:35.827481 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerStarted","Data":"42a516ddb0d66f24d5144206421958737a57f39341edb7550a3582d2af418b6f"} Jan 26 19:36:35 crc kubenswrapper[4737]: I0126 19:36:35.838513 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerStarted","Data":"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb"} Jan 26 19:36:35 crc kubenswrapper[4737]: I0126 19:36:35.872238 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hh8tt" podStartSLOduration=3.272922104 podStartE2EDuration="6.872218502s" podCreationTimestamp="2026-01-26 19:36:29 +0000 UTC" firstStartedPulling="2026-01-26 19:36:31.742636112 +0000 UTC m=+3965.050830810" lastFinishedPulling="2026-01-26 19:36:35.34193249 +0000 UTC m=+3968.650127208" observedRunningTime="2026-01-26 19:36:35.85291497 +0000 UTC m=+3969.161109698" 
watchObservedRunningTime="2026-01-26 19:36:35.872218502 +0000 UTC m=+3969.180413210" Jan 26 19:36:36 crc kubenswrapper[4737]: I0126 19:36:36.849331 4737 generic.go:334] "Generic (PLEG): container finished" podID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerID="eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb" exitCode=0 Jan 26 19:36:36 crc kubenswrapper[4737]: I0126 19:36:36.849440 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerDied","Data":"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb"} Jan 26 19:36:36 crc kubenswrapper[4737]: I0126 19:36:36.853519 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerStarted","Data":"98862738572cbbc4c473149d8f07fe95e1020c8dcd68bc1c294b2a5d98a76493"} Jan 26 19:36:38 crc kubenswrapper[4737]: I0126 19:36:38.873836 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerStarted","Data":"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d"} Jan 26 19:36:38 crc kubenswrapper[4737]: I0126 19:36:38.903434 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bvtxb" podStartSLOduration=3.944862289 podStartE2EDuration="6.903405522s" podCreationTimestamp="2026-01-26 19:36:32 +0000 UTC" firstStartedPulling="2026-01-26 19:36:34.81297793 +0000 UTC m=+3968.121172638" lastFinishedPulling="2026-01-26 19:36:37.771521173 +0000 UTC m=+3971.079715871" observedRunningTime="2026-01-26 19:36:38.890113446 +0000 UTC m=+3972.198308174" watchObservedRunningTime="2026-01-26 19:36:38.903405522 +0000 UTC m=+3972.211600230" Jan 26 19:36:40 crc kubenswrapper[4737]: I0126 
19:36:40.352358 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:40 crc kubenswrapper[4737]: I0126 19:36:40.352716 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:40 crc kubenswrapper[4737]: I0126 19:36:40.895163 4737 generic.go:334] "Generic (PLEG): container finished" podID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerID="98862738572cbbc4c473149d8f07fe95e1020c8dcd68bc1c294b2a5d98a76493" exitCode=0 Jan 26 19:36:40 crc kubenswrapper[4737]: I0126 19:36:40.895271 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerDied","Data":"98862738572cbbc4c473149d8f07fe95e1020c8dcd68bc1c294b2a5d98a76493"} Jan 26 19:36:40 crc kubenswrapper[4737]: I0126 19:36:40.982948 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:36:40 crc kubenswrapper[4737]: E0126 19:36:40.983214 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:41 crc kubenswrapper[4737]: I0126 19:36:41.409523 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hh8tt" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="registry-server" probeResult="failure" output=< Jan 26 19:36:41 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:36:41 crc kubenswrapper[4737]: > 
Jan 26 19:36:41 crc kubenswrapper[4737]: I0126 19:36:41.910904 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerStarted","Data":"6a33e6cb50c5753664630bb7d3b0e727938e532b39ac96dc9fd808ec27b8be1d"} Jan 26 19:36:41 crc kubenswrapper[4737]: I0126 19:36:41.934910 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6l5c" podStartSLOduration=3.44001021 podStartE2EDuration="9.934897359s" podCreationTimestamp="2026-01-26 19:36:32 +0000 UTC" firstStartedPulling="2026-01-26 19:36:34.796506388 +0000 UTC m=+3968.104701096" lastFinishedPulling="2026-01-26 19:36:41.291393547 +0000 UTC m=+3974.599588245" observedRunningTime="2026-01-26 19:36:41.932708615 +0000 UTC m=+3975.240903323" watchObservedRunningTime="2026-01-26 19:36:41.934897359 +0000 UTC m=+3975.243092067" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.220972 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.221034 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.319781 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.369680 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.369990 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:43 crc kubenswrapper[4737]: I0126 19:36:43.988826 4737 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:44 crc kubenswrapper[4737]: I0126 19:36:44.424087 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b6l5c" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="registry-server" probeResult="failure" output=< Jan 26 19:36:44 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:36:44 crc kubenswrapper[4737]: > Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.006987 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.007749 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bvtxb" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="registry-server" containerID="cri-o://4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d" gracePeriod=2 Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.758977 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.826425 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities\") pod \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.826575 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xds\" (UniqueName: \"kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds\") pod \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.826639 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content\") pod \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\" (UID: \"cd751fa9-1eca-47a7-9a04-3a27b21458cf\") " Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.827882 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities" (OuterVolumeSpecName: "utilities") pod "cd751fa9-1eca-47a7-9a04-3a27b21458cf" (UID: "cd751fa9-1eca-47a7-9a04-3a27b21458cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.837709 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds" (OuterVolumeSpecName: "kube-api-access-z4xds") pod "cd751fa9-1eca-47a7-9a04-3a27b21458cf" (UID: "cd751fa9-1eca-47a7-9a04-3a27b21458cf"). InnerVolumeSpecName "kube-api-access-z4xds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.856125 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd751fa9-1eca-47a7-9a04-3a27b21458cf" (UID: "cd751fa9-1eca-47a7-9a04-3a27b21458cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.930355 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4xds\" (UniqueName: \"kubernetes.io/projected/cd751fa9-1eca-47a7-9a04-3a27b21458cf-kube-api-access-z4xds\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.930428 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.930440 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd751fa9-1eca-47a7-9a04-3a27b21458cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.975806 4737 generic.go:334] "Generic (PLEG): container finished" podID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerID="4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d" exitCode=0 Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.975892 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bvtxb" Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.975891 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerDied","Data":"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d"} Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.976052 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bvtxb" event={"ID":"cd751fa9-1eca-47a7-9a04-3a27b21458cf","Type":"ContainerDied","Data":"591854ea01c90373ca319ec296b7f464d53b505035151ba6d0f9aacf2f6dbf4a"} Jan 26 19:36:46 crc kubenswrapper[4737]: I0126 19:36:46.976102 4737 scope.go:117] "RemoveContainer" containerID="4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.018378 4737 scope.go:117] "RemoveContainer" containerID="eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.032528 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.043282 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bvtxb"] Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.048269 4737 scope.go:117] "RemoveContainer" containerID="8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.117657 4737 scope.go:117] "RemoveContainer" containerID="4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d" Jan 26 19:36:47 crc kubenswrapper[4737]: E0126 19:36:47.118854 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d\": container with ID starting with 4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d not found: ID does not exist" containerID="4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.118900 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d"} err="failed to get container status \"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d\": rpc error: code = NotFound desc = could not find container \"4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d\": container with ID starting with 4869058e5d9513037efdfd455f16f1d0d1ae813a57fc0808f50fd685bfb72b5d not found: ID does not exist" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.118935 4737 scope.go:117] "RemoveContainer" containerID="eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb" Jan 26 19:36:47 crc kubenswrapper[4737]: E0126 19:36:47.119560 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb\": container with ID starting with eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb not found: ID does not exist" containerID="eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.119597 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb"} err="failed to get container status \"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb\": rpc error: code = NotFound desc = could not find container \"eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb\": container with ID 
starting with eeee81823bc45b43e94d907b2fa8e33481cf5e9a8a4b10e9d2f2027148a01bdb not found: ID does not exist" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.119624 4737 scope.go:117] "RemoveContainer" containerID="8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb" Jan 26 19:36:47 crc kubenswrapper[4737]: E0126 19:36:47.119911 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb\": container with ID starting with 8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb not found: ID does not exist" containerID="8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb" Jan 26 19:36:47 crc kubenswrapper[4737]: I0126 19:36:47.119960 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb"} err="failed to get container status \"8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb\": rpc error: code = NotFound desc = could not find container \"8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb\": container with ID starting with 8ffcac21234efbdbb3586069a3c96df5481775689e4866500ede5255b1e272bb not found: ID does not exist" Jan 26 19:36:48 crc kubenswrapper[4737]: I0126 19:36:48.997714 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" path="/var/lib/kubelet/pods/cd751fa9-1eca-47a7-9a04-3a27b21458cf/volumes" Jan 26 19:36:50 crc kubenswrapper[4737]: I0126 19:36:50.410041 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:50 crc kubenswrapper[4737]: I0126 19:36:50.471237 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:51 crc 
kubenswrapper[4737]: I0126 19:36:51.982499 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:36:51 crc kubenswrapper[4737]: E0126 19:36:51.983144 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:36:53 crc kubenswrapper[4737]: I0126 19:36:53.417568 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:53 crc kubenswrapper[4737]: I0126 19:36:53.467285 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:55 crc kubenswrapper[4737]: I0126 19:36:55.205503 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:55 crc kubenswrapper[4737]: I0126 19:36:55.206032 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hh8tt" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="registry-server" containerID="cri-o://42a516ddb0d66f24d5144206421958737a57f39341edb7550a3582d2af418b6f" gracePeriod=2 Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.085975 4737 generic.go:334] "Generic (PLEG): container finished" podID="398c29bb-610f-410f-9dfa-c33389465a6d" containerID="42a516ddb0d66f24d5144206421958737a57f39341edb7550a3582d2af418b6f" exitCode=0 Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.086335 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" 
event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerDied","Data":"42a516ddb0d66f24d5144206421958737a57f39341edb7550a3582d2af418b6f"} Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.086550 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh8tt" event={"ID":"398c29bb-610f-410f-9dfa-c33389465a6d","Type":"ContainerDied","Data":"8742e9ae2d70fd5dfe92416dbaf1b1d130338a2b7d0670a8d2d83319c947610f"} Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.086568 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8742e9ae2d70fd5dfe92416dbaf1b1d130338a2b7d0670a8d2d83319c947610f" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.212429 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.273009 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content\") pod \"398c29bb-610f-410f-9dfa-c33389465a6d\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.273321 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities\") pod \"398c29bb-610f-410f-9dfa-c33389465a6d\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.273364 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmnq\" (UniqueName: \"kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq\") pod \"398c29bb-610f-410f-9dfa-c33389465a6d\" (UID: \"398c29bb-610f-410f-9dfa-c33389465a6d\") " Jan 26 19:36:56 crc kubenswrapper[4737]: 
I0126 19:36:56.273958 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities" (OuterVolumeSpecName: "utilities") pod "398c29bb-610f-410f-9dfa-c33389465a6d" (UID: "398c29bb-610f-410f-9dfa-c33389465a6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.296545 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq" (OuterVolumeSpecName: "kube-api-access-2pmnq") pod "398c29bb-610f-410f-9dfa-c33389465a6d" (UID: "398c29bb-610f-410f-9dfa-c33389465a6d"). InnerVolumeSpecName "kube-api-access-2pmnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.332313 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "398c29bb-610f-410f-9dfa-c33389465a6d" (UID: "398c29bb-610f-410f-9dfa-c33389465a6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.375972 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.376044 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/398c29bb-610f-410f-9dfa-c33389465a6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:56 crc kubenswrapper[4737]: I0126 19:36:56.376058 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmnq\" (UniqueName: \"kubernetes.io/projected/398c29bb-610f-410f-9dfa-c33389465a6d-kube-api-access-2pmnq\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:57 crc kubenswrapper[4737]: I0126 19:36:57.097984 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hh8tt" Jan 26 19:36:57 crc kubenswrapper[4737]: I0126 19:36:57.124947 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:57 crc kubenswrapper[4737]: I0126 19:36:57.138021 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hh8tt"] Jan 26 19:36:57 crc kubenswrapper[4737]: I0126 19:36:57.609922 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:57 crc kubenswrapper[4737]: I0126 19:36:57.610463 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6l5c" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="registry-server" containerID="cri-o://6a33e6cb50c5753664630bb7d3b0e727938e532b39ac96dc9fd808ec27b8be1d" gracePeriod=2 Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 
19:36:58.114514 4737 generic.go:334] "Generic (PLEG): container finished" podID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerID="6a33e6cb50c5753664630bb7d3b0e727938e532b39ac96dc9fd808ec27b8be1d" exitCode=0 Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.114810 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerDied","Data":"6a33e6cb50c5753664630bb7d3b0e727938e532b39ac96dc9fd808ec27b8be1d"} Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.805313 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.837291 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4d42\" (UniqueName: \"kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42\") pod \"23e38a69-06a9-43c0-ac6e-b092d96a777c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.837801 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content\") pod \"23e38a69-06a9-43c0-ac6e-b092d96a777c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.838124 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities\") pod \"23e38a69-06a9-43c0-ac6e-b092d96a777c\" (UID: \"23e38a69-06a9-43c0-ac6e-b092d96a777c\") " Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.838837 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities" (OuterVolumeSpecName: "utilities") pod "23e38a69-06a9-43c0-ac6e-b092d96a777c" (UID: "23e38a69-06a9-43c0-ac6e-b092d96a777c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.839942 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.853964 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42" (OuterVolumeSpecName: "kube-api-access-n4d42") pod "23e38a69-06a9-43c0-ac6e-b092d96a777c" (UID: "23e38a69-06a9-43c0-ac6e-b092d96a777c"). InnerVolumeSpecName "kube-api-access-n4d42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.942601 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4d42\" (UniqueName: \"kubernetes.io/projected/23e38a69-06a9-43c0-ac6e-b092d96a777c-kube-api-access-n4d42\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:58 crc kubenswrapper[4737]: I0126 19:36:58.997323 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" path="/var/lib/kubelet/pods/398c29bb-610f-410f-9dfa-c33389465a6d/volumes" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.000346 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23e38a69-06a9-43c0-ac6e-b092d96a777c" (UID: "23e38a69-06a9-43c0-ac6e-b092d96a777c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.045629 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e38a69-06a9-43c0-ac6e-b092d96a777c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.130170 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6l5c" event={"ID":"23e38a69-06a9-43c0-ac6e-b092d96a777c","Type":"ContainerDied","Data":"da179e9ea3f3840eded736b46cdc234ba2d21f351e7371c41ae155909a463399"} Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.130247 4737 scope.go:117] "RemoveContainer" containerID="6a33e6cb50c5753664630bb7d3b0e727938e532b39ac96dc9fd808ec27b8be1d" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.130338 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6l5c" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.170569 4737 scope.go:117] "RemoveContainer" containerID="98862738572cbbc4c473149d8f07fe95e1020c8dcd68bc1c294b2a5d98a76493" Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.180720 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.200218 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6l5c"] Jan 26 19:36:59 crc kubenswrapper[4737]: I0126 19:36:59.201863 4737 scope.go:117] "RemoveContainer" containerID="96a82969f41a58ee1aad81f1d3bb8b7ca77cdbefe2738ba0550807101b01d54a" Jan 26 19:37:00 crc kubenswrapper[4737]: I0126 19:37:00.999007 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" path="/var/lib/kubelet/pods/23e38a69-06a9-43c0-ac6e-b092d96a777c/volumes" Jan 26 19:37:06 crc 
kubenswrapper[4737]: I0126 19:37:06.992208 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:37:06 crc kubenswrapper[4737]: E0126 19:37:06.993212 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:37:20 crc kubenswrapper[4737]: I0126 19:37:20.983049 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:37:20 crc kubenswrapper[4737]: E0126 19:37:20.984366 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:37:31 crc kubenswrapper[4737]: I0126 19:37:31.982009 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:37:31 crc kubenswrapper[4737]: E0126 19:37:31.982784 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 
26 19:37:45 crc kubenswrapper[4737]: I0126 19:37:45.982674 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:37:45 crc kubenswrapper[4737]: E0126 19:37:45.983627 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:37:56 crc kubenswrapper[4737]: I0126 19:37:56.991575 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:37:56 crc kubenswrapper[4737]: E0126 19:37:56.992827 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:38:10 crc kubenswrapper[4737]: I0126 19:38:10.983673 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:38:10 crc kubenswrapper[4737]: E0126 19:38:10.984601 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:38:23 crc kubenswrapper[4737]: I0126 19:38:23.982091 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:38:23 crc kubenswrapper[4737]: E0126 19:38:23.983172 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:38:34 crc kubenswrapper[4737]: I0126 19:38:34.982146 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:38:34 crc kubenswrapper[4737]: E0126 19:38:34.983104 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:38:45 crc kubenswrapper[4737]: I0126 19:38:45.982841 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:38:45 crc kubenswrapper[4737]: E0126 19:38:45.983920 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:38:59 crc kubenswrapper[4737]: I0126 19:38:59.982284 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:38:59 crc kubenswrapper[4737]: E0126 19:38:59.983136 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:39:10 crc kubenswrapper[4737]: I0126 19:39:10.983775 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:39:10 crc kubenswrapper[4737]: E0126 19:39:10.984451 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:39:22 crc kubenswrapper[4737]: I0126 19:39:22.982163 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:39:22 crc kubenswrapper[4737]: E0126 19:39:22.983176 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:39:35 crc kubenswrapper[4737]: I0126 19:39:35.982946 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:39:36 crc kubenswrapper[4737]: I0126 19:39:36.848676 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781"} Jan 26 19:42:00 crc kubenswrapper[4737]: I0126 19:42:00.949325 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:42:00 crc kubenswrapper[4737]: I0126 19:42:00.949984 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:42:30 crc kubenswrapper[4737]: I0126 19:42:30.948760 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:42:30 crc kubenswrapper[4737]: I0126 19:42:30.949374 4737 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:42:36 crc kubenswrapper[4737]: I0126 19:42:36.998871 4737 scope.go:117] "RemoveContainer" containerID="5c6305f12028dcba9684d4e747a926106efcc6771739b8c026c05b50ee092dc5" Jan 26 19:42:37 crc kubenswrapper[4737]: I0126 19:42:37.066278 4737 scope.go:117] "RemoveContainer" containerID="f57dcd1c2e96e3c5e2c755c8a314bc3590c5fc50ad5fd2a1e7134219df67cfc2" Jan 26 19:42:37 crc kubenswrapper[4737]: I0126 19:42:37.140218 4737 scope.go:117] "RemoveContainer" containerID="42a516ddb0d66f24d5144206421958737a57f39341edb7550a3582d2af418b6f" Jan 26 19:43:00 crc kubenswrapper[4737]: I0126 19:43:00.949169 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:43:00 crc kubenswrapper[4737]: I0126 19:43:00.949774 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:43:00 crc kubenswrapper[4737]: I0126 19:43:00.949834 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:43:00 crc kubenswrapper[4737]: I0126 19:43:00.950964 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:43:00 crc kubenswrapper[4737]: I0126 19:43:00.951023 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781" gracePeriod=600 Jan 26 19:43:01 crc kubenswrapper[4737]: I0126 19:43:01.231589 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781" exitCode=0 Jan 26 19:43:01 crc kubenswrapper[4737]: I0126 19:43:01.231651 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781"} Jan 26 19:43:01 crc kubenswrapper[4737]: I0126 19:43:01.232431 4737 scope.go:117] "RemoveContainer" containerID="bea5325956185fb27826d8fae2a183cfd4f393578349aee8d75e63af7507aee3" Jan 26 19:43:02 crc kubenswrapper[4737]: I0126 19:43:02.245466 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3"} Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.194441 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc"] Jan 26 19:45:00 crc kubenswrapper[4737]: 
E0126 19:45:00.195644 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195659 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195672 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195677 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195694 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195702 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195710 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195717 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195727 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195736 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 
19:45:00.195747 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195753 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195774 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195780 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195799 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195807 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4737]: E0126 19:45:00.195825 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.195831 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.196043 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="398c29bb-610f-410f-9dfa-c33389465a6d" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.196061 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd751fa9-1eca-47a7-9a04-3a27b21458cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 
19:45:00.196093 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e38a69-06a9-43c0-ac6e-b092d96a777c" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.197018 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.200034 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.200340 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.222694 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc"] Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.293004 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4pw\" (UniqueName: \"kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.293355 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.293599 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.395821 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.395962 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4pw\" (UniqueName: \"kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.396016 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.398118 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.406775 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.416549 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4pw\" (UniqueName: \"kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw\") pod \"collect-profiles-29490945-jw2xc\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:00 crc kubenswrapper[4737]: I0126 19:45:00.536141 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:01 crc kubenswrapper[4737]: I0126 19:45:01.068968 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc"] Jan 26 19:45:01 crc kubenswrapper[4737]: I0126 19:45:01.589458 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" event={"ID":"8d713092-777f-413e-9356-8d5ffaa09d8a","Type":"ContainerStarted","Data":"7c1bfb94c5e071b2bda36678fdf3fcf688f2ac601326ed878d61b04568b21b2b"} Jan 26 19:45:01 crc kubenswrapper[4737]: I0126 19:45:01.589508 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" event={"ID":"8d713092-777f-413e-9356-8d5ffaa09d8a","Type":"ContainerStarted","Data":"685051f4920d926c06b73abc8ff2cdb2e61cfcf3b1f1f4202d4a8a903db5a648"} Jan 26 19:45:01 crc kubenswrapper[4737]: I0126 19:45:01.616687 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" podStartSLOduration=1.616669205 podStartE2EDuration="1.616669205s" podCreationTimestamp="2026-01-26 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:45:01.606862854 +0000 UTC m=+4474.915057562" watchObservedRunningTime="2026-01-26 19:45:01.616669205 +0000 UTC m=+4474.924863913" Jan 26 19:45:02 crc kubenswrapper[4737]: I0126 19:45:02.604707 4737 generic.go:334] "Generic (PLEG): container finished" podID="8d713092-777f-413e-9356-8d5ffaa09d8a" containerID="7c1bfb94c5e071b2bda36678fdf3fcf688f2ac601326ed878d61b04568b21b2b" exitCode=0 Jan 26 19:45:02 crc kubenswrapper[4737]: I0126 19:45:02.605111 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" event={"ID":"8d713092-777f-413e-9356-8d5ffaa09d8a","Type":"ContainerDied","Data":"7c1bfb94c5e071b2bda36678fdf3fcf688f2ac601326ed878d61b04568b21b2b"} Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.060857 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.213268 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume\") pod \"8d713092-777f-413e-9356-8d5ffaa09d8a\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.213453 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw4pw\" (UniqueName: \"kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw\") pod \"8d713092-777f-413e-9356-8d5ffaa09d8a\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.213593 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume\") pod \"8d713092-777f-413e-9356-8d5ffaa09d8a\" (UID: \"8d713092-777f-413e-9356-8d5ffaa09d8a\") " Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.215255 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d713092-777f-413e-9356-8d5ffaa09d8a" (UID: "8d713092-777f-413e-9356-8d5ffaa09d8a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.219811 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw" (OuterVolumeSpecName: "kube-api-access-zw4pw") pod "8d713092-777f-413e-9356-8d5ffaa09d8a" (UID: "8d713092-777f-413e-9356-8d5ffaa09d8a"). InnerVolumeSpecName "kube-api-access-zw4pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.221729 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d713092-777f-413e-9356-8d5ffaa09d8a" (UID: "8d713092-777f-413e-9356-8d5ffaa09d8a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.318045 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d713092-777f-413e-9356-8d5ffaa09d8a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.318111 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw4pw\" (UniqueName: \"kubernetes.io/projected/8d713092-777f-413e-9356-8d5ffaa09d8a-kube-api-access-zw4pw\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.318123 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d713092-777f-413e-9356-8d5ffaa09d8a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.627505 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" 
event={"ID":"8d713092-777f-413e-9356-8d5ffaa09d8a","Type":"ContainerDied","Data":"685051f4920d926c06b73abc8ff2cdb2e61cfcf3b1f1f4202d4a8a903db5a648"} Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.627551 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685051f4920d926c06b73abc8ff2cdb2e61cfcf3b1f1f4202d4a8a903db5a648" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.627561 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc" Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.691839 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86"] Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.704649 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-5hl86"] Jan 26 19:45:04 crc kubenswrapper[4737]: I0126 19:45:04.995944 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75833d1d-a0c8-4b19-8754-f491c70ce8e3" path="/var/lib/kubelet/pods/75833d1d-a0c8-4b19-8754-f491c70ce8e3/volumes" Jan 26 19:45:30 crc kubenswrapper[4737]: I0126 19:45:30.949306 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:45:30 crc kubenswrapper[4737]: I0126 19:45:30.949923 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:45:37 crc 
kubenswrapper[4737]: I0126 19:45:37.308627 4737 scope.go:117] "RemoveContainer" containerID="5108861c4cb099fc4e4b3a0f817369f4ef64d904b5ae6533f7c5aca450b244de" Jan 26 19:46:00 crc kubenswrapper[4737]: I0126 19:46:00.948661 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:46:00 crc kubenswrapper[4737]: I0126 19:46:00.949312 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:46:30 crc kubenswrapper[4737]: I0126 19:46:30.948726 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:46:30 crc kubenswrapper[4737]: I0126 19:46:30.949657 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:46:30 crc kubenswrapper[4737]: I0126 19:46:30.949705 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:46:30 crc kubenswrapper[4737]: I0126 19:46:30.950521 4737 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:46:30 crc kubenswrapper[4737]: I0126 19:46:30.950648 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" gracePeriod=600 Jan 26 19:46:31 crc kubenswrapper[4737]: E0126 19:46:31.088543 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:46:31 crc kubenswrapper[4737]: I0126 19:46:31.606236 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" exitCode=0 Jan 26 19:46:31 crc kubenswrapper[4737]: I0126 19:46:31.606284 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3"} Jan 26 19:46:31 crc kubenswrapper[4737]: I0126 19:46:31.606326 4737 scope.go:117] "RemoveContainer" containerID="e2c50232a5fa93efde224493847b8e0a84baab428efbb8de02cab3290ca68781" Jan 26 19:46:31 crc 
kubenswrapper[4737]: I0126 19:46:31.607262 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:46:31 crc kubenswrapper[4737]: E0126 19:46:31.607742 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.338697 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:46:39 crc kubenswrapper[4737]: E0126 19:46:39.339888 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d713092-777f-413e-9356-8d5ffaa09d8a" containerName="collect-profiles" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.339908 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d713092-777f-413e-9356-8d5ffaa09d8a" containerName="collect-profiles" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.340272 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d713092-777f-413e-9356-8d5ffaa09d8a" containerName="collect-profiles" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.342506 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.367295 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.441947 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.442009 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.442204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2tn\" (UniqueName: \"kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.543917 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g2tn\" (UniqueName: \"kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.544436 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.544468 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.544932 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.545244 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.570950 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g2tn\" (UniqueName: \"kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn\") pod \"redhat-operators-k9dc4\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:39 crc kubenswrapper[4737]: I0126 19:46:39.671681 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:40 crc kubenswrapper[4737]: I0126 19:46:40.369204 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:46:40 crc kubenswrapper[4737]: I0126 19:46:40.719356 4737 generic.go:334] "Generic (PLEG): container finished" podID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerID="53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69" exitCode=0 Jan 26 19:46:40 crc kubenswrapper[4737]: I0126 19:46:40.719409 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerDied","Data":"53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69"} Jan 26 19:46:40 crc kubenswrapper[4737]: I0126 19:46:40.719437 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerStarted","Data":"f052936c06a5e7a33f3466d377140159f3a181645e95490835eef07f7af66e0b"} Jan 26 19:46:40 crc kubenswrapper[4737]: I0126 19:46:40.725666 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:46:42 crc kubenswrapper[4737]: I0126 19:46:42.743424 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerStarted","Data":"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1"} Jan 26 19:46:43 crc kubenswrapper[4737]: I0126 19:46:43.982102 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:46:43 crc kubenswrapper[4737]: E0126 19:46:43.982679 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:46:47 crc kubenswrapper[4737]: I0126 19:46:47.800924 4737 generic.go:334] "Generic (PLEG): container finished" podID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerID="672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1" exitCode=0 Jan 26 19:46:47 crc kubenswrapper[4737]: I0126 19:46:47.801006 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerDied","Data":"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1"} Jan 26 19:46:48 crc kubenswrapper[4737]: I0126 19:46:48.821986 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerStarted","Data":"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e"} Jan 26 19:46:48 crc kubenswrapper[4737]: I0126 19:46:48.841658 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9dc4" podStartSLOduration=2.2951240889999998 podStartE2EDuration="9.841639045s" podCreationTimestamp="2026-01-26 19:46:39 +0000 UTC" firstStartedPulling="2026-01-26 19:46:40.725409827 +0000 UTC m=+4574.033604535" lastFinishedPulling="2026-01-26 19:46:48.271924783 +0000 UTC m=+4581.580119491" observedRunningTime="2026-01-26 19:46:48.839606436 +0000 UTC m=+4582.147801144" watchObservedRunningTime="2026-01-26 19:46:48.841639045 +0000 UTC m=+4582.149833753" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.316033 4737 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.318845 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.328364 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.328748 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.328821 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.328888 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x2z\" (UniqueName: \"kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.431788 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " 
pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.432304 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.432903 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.432495 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27x2z\" (UniqueName: \"kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.433172 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.459300 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27x2z\" (UniqueName: \"kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z\") pod \"redhat-marketplace-f8npf\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " 
pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.652800 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.675575 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:49 crc kubenswrapper[4737]: I0126 19:46:49.675629 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:46:50 crc kubenswrapper[4737]: I0126 19:46:50.326150 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:46:50 crc kubenswrapper[4737]: I0126 19:46:50.765107 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9dc4" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" probeResult="failure" output=< Jan 26 19:46:50 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:46:50 crc kubenswrapper[4737]: > Jan 26 19:46:50 crc kubenswrapper[4737]: I0126 19:46:50.868980 4737 generic.go:334] "Generic (PLEG): container finished" podID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerID="0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa" exitCode=0 Jan 26 19:46:50 crc kubenswrapper[4737]: I0126 19:46:50.869521 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerDied","Data":"0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa"} Jan 26 19:46:50 crc kubenswrapper[4737]: I0126 19:46:50.869563 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" 
event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerStarted","Data":"3e521f1522634914a1093d732a360550792925dba2f6c82c0393b7a96145a69c"} Jan 26 19:46:52 crc kubenswrapper[4737]: I0126 19:46:52.893038 4737 generic.go:334] "Generic (PLEG): container finished" podID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerID="0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d" exitCode=0 Jan 26 19:46:52 crc kubenswrapper[4737]: I0126 19:46:52.893112 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerDied","Data":"0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d"} Jan 26 19:46:54 crc kubenswrapper[4737]: I0126 19:46:54.921394 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerStarted","Data":"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a"} Jan 26 19:46:54 crc kubenswrapper[4737]: I0126 19:46:54.946654 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f8npf" podStartSLOduration=3.517781016 podStartE2EDuration="5.946626522s" podCreationTimestamp="2026-01-26 19:46:49 +0000 UTC" firstStartedPulling="2026-01-26 19:46:50.871856434 +0000 UTC m=+4584.180051142" lastFinishedPulling="2026-01-26 19:46:53.30070194 +0000 UTC m=+4586.608896648" observedRunningTime="2026-01-26 19:46:54.939337423 +0000 UTC m=+4588.247532131" watchObservedRunningTime="2026-01-26 19:46:54.946626522 +0000 UTC m=+4588.254821230" Jan 26 19:46:54 crc kubenswrapper[4737]: I0126 19:46:54.982164 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:46:54 crc kubenswrapper[4737]: E0126 19:46:54.982474 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:46:59 crc kubenswrapper[4737]: I0126 19:46:59.652974 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:59 crc kubenswrapper[4737]: I0126 19:46:59.653631 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:46:59 crc kubenswrapper[4737]: I0126 19:46:59.709037 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:47:00 crc kubenswrapper[4737]: I0126 19:47:00.022801 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:47:00 crc kubenswrapper[4737]: I0126 19:47:00.075365 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:47:00 crc kubenswrapper[4737]: I0126 19:47:00.729135 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9dc4" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" probeResult="failure" output=< Jan 26 19:47:00 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:47:00 crc kubenswrapper[4737]: > Jan 26 19:47:01 crc kubenswrapper[4737]: I0126 19:47:01.992423 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f8npf" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="registry-server" 
containerID="cri-o://1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a" gracePeriod=2 Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.615758 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.711699 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities\") pod \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.711803 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content\") pod \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.712001 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x2z\" (UniqueName: \"kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z\") pod \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\" (UID: \"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5\") " Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.712551 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities" (OuterVolumeSpecName: "utilities") pod "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" (UID: "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.712784 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.722399 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z" (OuterVolumeSpecName: "kube-api-access-27x2z") pod "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" (UID: "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5"). InnerVolumeSpecName "kube-api-access-27x2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.741186 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" (UID: "9007eaed-d294-4c2b-a3e9-bbf86f95bfd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.815688 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:02 crc kubenswrapper[4737]: I0126 19:47:02.815742 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27x2z\" (UniqueName: \"kubernetes.io/projected/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5-kube-api-access-27x2z\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.015501 4737 generic.go:334] "Generic (PLEG): container finished" podID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerID="1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a" exitCode=0 Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.015543 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8npf" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.015553 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerDied","Data":"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a"} Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.015593 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8npf" event={"ID":"9007eaed-d294-4c2b-a3e9-bbf86f95bfd5","Type":"ContainerDied","Data":"3e521f1522634914a1093d732a360550792925dba2f6c82c0393b7a96145a69c"} Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.015614 4737 scope.go:117] "RemoveContainer" containerID="1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.046755 4737 scope.go:117] "RemoveContainer" 
containerID="0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.070727 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.086435 4737 scope.go:117] "RemoveContainer" containerID="0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.089650 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8npf"] Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.154501 4737 scope.go:117] "RemoveContainer" containerID="1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a" Jan 26 19:47:03 crc kubenswrapper[4737]: E0126 19:47:03.155032 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a\": container with ID starting with 1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a not found: ID does not exist" containerID="1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.155105 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a"} err="failed to get container status \"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a\": rpc error: code = NotFound desc = could not find container \"1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a\": container with ID starting with 1fce7c78c66f8775a28e63f823871b75ed89c4fce62d8d7feba3ddac09eacf2a not found: ID does not exist" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.155146 4737 scope.go:117] "RemoveContainer" 
containerID="0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d" Jan 26 19:47:03 crc kubenswrapper[4737]: E0126 19:47:03.155548 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d\": container with ID starting with 0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d not found: ID does not exist" containerID="0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.155633 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d"} err="failed to get container status \"0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d\": rpc error: code = NotFound desc = could not find container \"0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d\": container with ID starting with 0abaca393277560ec5cc9870df8396e4fbe08b6e7600ca8cc743e8a8cdd7d85d not found: ID does not exist" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.155655 4737 scope.go:117] "RemoveContainer" containerID="0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa" Jan 26 19:47:03 crc kubenswrapper[4737]: E0126 19:47:03.155957 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa\": container with ID starting with 0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa not found: ID does not exist" containerID="0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa" Jan 26 19:47:03 crc kubenswrapper[4737]: I0126 19:47:03.155984 4737 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa"} err="failed to get container status \"0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa\": rpc error: code = NotFound desc = could not find container \"0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa\": container with ID starting with 0ea51d888336bf61174a611437abc39a28a7c68096dd9c9b5263981c317eccaa not found: ID does not exist" Jan 26 19:47:04 crc kubenswrapper[4737]: I0126 19:47:04.995013 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" path="/var/lib/kubelet/pods/9007eaed-d294-4c2b-a3e9-bbf86f95bfd5/volumes" Jan 26 19:47:07 crc kubenswrapper[4737]: I0126 19:47:07.982289 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:47:07 crc kubenswrapper[4737]: E0126 19:47:07.983182 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:47:08 crc kubenswrapper[4737]: E0126 19:47:08.865152 4737 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.236:36348->38.102.83.236:42217: read tcp 38.102.83.236:36348->38.102.83.236:42217: read: connection reset by peer Jan 26 19:47:09 crc kubenswrapper[4737]: I0126 19:47:09.728873 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:47:09 crc kubenswrapper[4737]: I0126 19:47:09.787278 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:47:10 crc kubenswrapper[4737]: I0126 19:47:10.539470 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.099666 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9dc4" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" containerID="cri-o://5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e" gracePeriod=2 Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.693315 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.745875 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g2tn\" (UniqueName: \"kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn\") pod \"37dd0cbb-8aed-45b9-b60e-01286b420e96\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.745977 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content\") pod \"37dd0cbb-8aed-45b9-b60e-01286b420e96\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.746159 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities\") pod \"37dd0cbb-8aed-45b9-b60e-01286b420e96\" (UID: \"37dd0cbb-8aed-45b9-b60e-01286b420e96\") " Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.747340 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities" (OuterVolumeSpecName: "utilities") pod "37dd0cbb-8aed-45b9-b60e-01286b420e96" (UID: "37dd0cbb-8aed-45b9-b60e-01286b420e96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.759776 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn" (OuterVolumeSpecName: "kube-api-access-8g2tn") pod "37dd0cbb-8aed-45b9-b60e-01286b420e96" (UID: "37dd0cbb-8aed-45b9-b60e-01286b420e96"). InnerVolumeSpecName "kube-api-access-8g2tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.849349 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g2tn\" (UniqueName: \"kubernetes.io/projected/37dd0cbb-8aed-45b9-b60e-01286b420e96-kube-api-access-8g2tn\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.849392 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.870248 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37dd0cbb-8aed-45b9-b60e-01286b420e96" (UID: "37dd0cbb-8aed-45b9-b60e-01286b420e96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:47:11 crc kubenswrapper[4737]: I0126 19:47:11.952240 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37dd0cbb-8aed-45b9-b60e-01286b420e96-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.113610 4737 generic.go:334] "Generic (PLEG): container finished" podID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerID="5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e" exitCode=0 Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.113677 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerDied","Data":"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e"} Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.113697 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9dc4" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.113721 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9dc4" event={"ID":"37dd0cbb-8aed-45b9-b60e-01286b420e96","Type":"ContainerDied","Data":"f052936c06a5e7a33f3466d377140159f3a181645e95490835eef07f7af66e0b"} Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.113753 4737 scope.go:117] "RemoveContainer" containerID="5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.151561 4737 scope.go:117] "RemoveContainer" containerID="672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.191518 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.199861 4737 scope.go:117] "RemoveContainer" containerID="53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.202972 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9dc4"] Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.256571 4737 scope.go:117] "RemoveContainer" containerID="5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e" Jan 26 19:47:12 crc kubenswrapper[4737]: E0126 19:47:12.257048 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e\": container with ID starting with 5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e not found: ID does not exist" containerID="5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.257091 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e"} err="failed to get container status \"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e\": rpc error: code = NotFound desc = could not find container \"5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e\": container with ID starting with 5fd728cbcdd77894078ddb69683e5170c2b1cbbd2b3a17256a2ee6d4f393410e not found: ID does not exist" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.257141 4737 scope.go:117] "RemoveContainer" containerID="672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1" Jan 26 19:47:12 crc kubenswrapper[4737]: E0126 19:47:12.257393 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1\": container with ID starting with 672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1 not found: ID does not exist" containerID="672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.257408 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1"} err="failed to get container status \"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1\": rpc error: code = NotFound desc = could not find container \"672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1\": container with ID starting with 672ab4ba5090ce7011b4f38499e86d208b9501c236dee6ae75173df3e064fcb1 not found: ID does not exist" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.257419 4737 scope.go:117] "RemoveContainer" containerID="53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69" Jan 26 19:47:12 crc kubenswrapper[4737]: E0126 
19:47:12.259087 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69\": container with ID starting with 53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69 not found: ID does not exist" containerID="53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69" Jan 26 19:47:12 crc kubenswrapper[4737]: I0126 19:47:12.259129 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69"} err="failed to get container status \"53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69\": rpc error: code = NotFound desc = could not find container \"53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69\": container with ID starting with 53b459efc84eeda84a8665ae712a4b4cfa3defdc6acdb7a4ca4844cf415bdc69 not found: ID does not exist" Jan 26 19:47:13 crc kubenswrapper[4737]: I0126 19:47:13.008147 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" path="/var/lib/kubelet/pods/37dd0cbb-8aed-45b9-b60e-01286b420e96/volumes" Jan 26 19:47:22 crc kubenswrapper[4737]: I0126 19:47:22.981784 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:47:22 crc kubenswrapper[4737]: E0126 19:47:22.982647 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:47:35 crc kubenswrapper[4737]: I0126 19:47:35.982581 
4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:47:35 crc kubenswrapper[4737]: E0126 19:47:35.983517 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:47:47 crc kubenswrapper[4737]: I0126 19:47:47.634535 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-bbw9t" podUID="ed97d0e9-4ae3-4db6-9635-38141f37948e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 19:47:47 crc kubenswrapper[4737]: I0126 19:47:47.982618 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:47:47 crc kubenswrapper[4737]: E0126 19:47:47.982935 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:47:59 crc kubenswrapper[4737]: I0126 19:47:59.983397 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:47:59 crc kubenswrapper[4737]: E0126 19:47:59.985549 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:48:11 crc kubenswrapper[4737]: I0126 19:48:11.983092 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:48:11 crc kubenswrapper[4737]: E0126 19:48:11.983946 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:48:22 crc kubenswrapper[4737]: I0126 19:48:22.982999 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:48:22 crc kubenswrapper[4737]: E0126 19:48:22.983958 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:48:37 crc kubenswrapper[4737]: I0126 19:48:37.983488 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:48:37 crc kubenswrapper[4737]: E0126 19:48:37.985493 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:48:48 crc kubenswrapper[4737]: I0126 19:48:48.982527 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:48:48 crc kubenswrapper[4737]: E0126 19:48:48.983349 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:49:01 crc kubenswrapper[4737]: I0126 19:49:01.981967 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:49:01 crc kubenswrapper[4737]: E0126 19:49:01.983000 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:49:12 crc kubenswrapper[4737]: I0126 19:49:12.982093 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:49:12 crc kubenswrapper[4737]: E0126 19:49:12.982836 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:49:26 crc kubenswrapper[4737]: I0126 19:49:26.991168 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:49:26 crc kubenswrapper[4737]: E0126 19:49:26.992865 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:49:37 crc kubenswrapper[4737]: I0126 19:49:37.982869 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:49:37 crc kubenswrapper[4737]: E0126 19:49:37.983877 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:49:49 crc kubenswrapper[4737]: I0126 19:49:49.982392 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:49:49 crc kubenswrapper[4737]: E0126 19:49:49.983305 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:50:00 crc kubenswrapper[4737]: I0126 19:50:00.982940 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:50:00 crc kubenswrapper[4737]: E0126 19:50:00.983805 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:50:13 crc kubenswrapper[4737]: I0126 19:50:13.983761 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:50:13 crc kubenswrapper[4737]: E0126 19:50:13.984774 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:50:27 crc kubenswrapper[4737]: I0126 19:50:27.002898 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:50:27 crc kubenswrapper[4737]: E0126 19:50:27.004568 4737 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:50:41 crc kubenswrapper[4737]: I0126 19:50:41.982506 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:50:41 crc kubenswrapper[4737]: E0126 19:50:41.983651 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:50:56 crc kubenswrapper[4737]: I0126 19:50:56.982483 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:50:56 crc kubenswrapper[4737]: E0126 19:50:56.983573 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:51:09 crc kubenswrapper[4737]: I0126 19:51:09.981706 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:51:09 crc kubenswrapper[4737]: E0126 19:51:09.982475 4737 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:51:24 crc kubenswrapper[4737]: I0126 19:51:24.982774 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:51:24 crc kubenswrapper[4737]: E0126 19:51:24.983854 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.674732 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjsbs"] Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675785 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675797 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675819 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="extract-utilities" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675826 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="extract-utilities" Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675841 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675847 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675865 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="extract-content" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675871 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="extract-content" Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675884 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="extract-content" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675895 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="extract-content" Jan 26 19:51:29 crc kubenswrapper[4737]: E0126 19:51:29.675916 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="extract-utilities" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.675923 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="extract-utilities" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.676209 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9007eaed-d294-4c2b-a3e9-bbf86f95bfd5" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.676234 4737 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="37dd0cbb-8aed-45b9-b60e-01286b420e96" containerName="registry-server" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.678288 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.689486 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjsbs"] Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.786615 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqh9\" (UniqueName: \"kubernetes.io/projected/f325c214-4902-4a66-a21c-d29413e523f3-kube-api-access-hdqh9\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.786776 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-utilities\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.786863 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-catalog-content\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.888625 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-utilities\") pod \"community-operators-qjsbs\" (UID: 
\"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.888746 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-catalog-content\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.888810 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqh9\" (UniqueName: \"kubernetes.io/projected/f325c214-4902-4a66-a21c-d29413e523f3-kube-api-access-hdqh9\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.889596 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-catalog-content\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.889665 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f325c214-4902-4a66-a21c-d29413e523f3-utilities\") pod \"community-operators-qjsbs\" (UID: \"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:29 crc kubenswrapper[4737]: I0126 19:51:29.931101 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqh9\" (UniqueName: \"kubernetes.io/projected/f325c214-4902-4a66-a21c-d29413e523f3-kube-api-access-hdqh9\") pod \"community-operators-qjsbs\" (UID: 
\"f325c214-4902-4a66-a21c-d29413e523f3\") " pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:30 crc kubenswrapper[4737]: I0126 19:51:30.013111 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:30 crc kubenswrapper[4737]: I0126 19:51:30.580025 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjsbs"] Jan 26 19:51:31 crc kubenswrapper[4737]: I0126 19:51:31.259357 4737 generic.go:334] "Generic (PLEG): container finished" podID="f325c214-4902-4a66-a21c-d29413e523f3" containerID="f8c9a6bd89d94875b3f1a2fe142a5ec00341045e3a165a06e4b1694585cc5154" exitCode=0 Jan 26 19:51:31 crc kubenswrapper[4737]: I0126 19:51:31.259431 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjsbs" event={"ID":"f325c214-4902-4a66-a21c-d29413e523f3","Type":"ContainerDied","Data":"f8c9a6bd89d94875b3f1a2fe142a5ec00341045e3a165a06e4b1694585cc5154"} Jan 26 19:51:31 crc kubenswrapper[4737]: I0126 19:51:31.259893 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjsbs" event={"ID":"f325c214-4902-4a66-a21c-d29413e523f3","Type":"ContainerStarted","Data":"a5d92e74c1c3e9af9e970c785d5d60f0d0b67b6515fb0f5e6086a0258d4d234e"} Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.480057 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.483178 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.500898 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.508044 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldxww\" (UniqueName: \"kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.508590 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.508641 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.611667 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldxww\" (UniqueName: \"kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.611828 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.611859 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.612492 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.612605 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.645786 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldxww\" (UniqueName: \"kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww\") pod \"certified-operators-5xbnr\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:33 crc kubenswrapper[4737]: I0126 19:51:33.808239 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:34 crc kubenswrapper[4737]: I0126 19:51:34.525280 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:36 crc kubenswrapper[4737]: I0126 19:51:36.321515 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerStarted","Data":"96eada9ce7840bce3df0d372b8508a22f43c9e9e48f33894dbeb4b6d4cf3c3e4"} Jan 26 19:51:36 crc kubenswrapper[4737]: I0126 19:51:36.995087 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:51:37 crc kubenswrapper[4737]: I0126 19:51:37.337967 4737 generic.go:334] "Generic (PLEG): container finished" podID="916c9a24-683b-491a-b306-24862478ba4c" containerID="8e5e91440131074d527e216420d04f47414f10f0f594336f1e5a68a8ecb47e20" exitCode=0 Jan 26 19:51:37 crc kubenswrapper[4737]: I0126 19:51:37.338024 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerDied","Data":"8e5e91440131074d527e216420d04f47414f10f0f594336f1e5a68a8ecb47e20"} Jan 26 19:51:37 crc kubenswrapper[4737]: I0126 19:51:37.341442 4737 generic.go:334] "Generic (PLEG): container finished" podID="f325c214-4902-4a66-a21c-d29413e523f3" containerID="6c9e564a23ebd167c3ff139bc64fc6cf9d9e762a8db298bb29926729e7c6d516" exitCode=0 Jan 26 19:51:37 crc kubenswrapper[4737]: I0126 19:51:37.341487 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjsbs" event={"ID":"f325c214-4902-4a66-a21c-d29413e523f3","Type":"ContainerDied","Data":"6c9e564a23ebd167c3ff139bc64fc6cf9d9e762a8db298bb29926729e7c6d516"} Jan 26 19:51:38 crc kubenswrapper[4737]: I0126 19:51:38.358766 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0"} Jan 26 19:51:39 crc kubenswrapper[4737]: I0126 19:51:39.380269 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerStarted","Data":"91318435fd8ff17270e1e376855be81014afbb2e78f3fe274bba1f4e467b1da5"} Jan 26 19:51:39 crc kubenswrapper[4737]: I0126 19:51:39.386221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjsbs" event={"ID":"f325c214-4902-4a66-a21c-d29413e523f3","Type":"ContainerStarted","Data":"399eaf432e465a11e0774b06063e21c2bf94b6a6725871dba3999b8b472df3a3"} Jan 26 19:51:39 crc kubenswrapper[4737]: I0126 19:51:39.433927 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjsbs" podStartSLOduration=3.330387204 podStartE2EDuration="10.433907102s" podCreationTimestamp="2026-01-26 19:51:29 +0000 UTC" firstStartedPulling="2026-01-26 19:51:31.262437242 +0000 UTC m=+4864.570631950" lastFinishedPulling="2026-01-26 19:51:38.36595713 +0000 UTC m=+4871.674151848" observedRunningTime="2026-01-26 19:51:39.430021446 +0000 UTC m=+4872.738216154" watchObservedRunningTime="2026-01-26 19:51:39.433907102 +0000 UTC m=+4872.742101810" Jan 26 19:51:40 crc kubenswrapper[4737]: I0126 19:51:40.018155 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:40 crc kubenswrapper[4737]: I0126 19:51:40.019464 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:40 crc kubenswrapper[4737]: I0126 19:51:40.399546 4737 generic.go:334] 
"Generic (PLEG): container finished" podID="916c9a24-683b-491a-b306-24862478ba4c" containerID="91318435fd8ff17270e1e376855be81014afbb2e78f3fe274bba1f4e467b1da5" exitCode=0 Jan 26 19:51:40 crc kubenswrapper[4737]: I0126 19:51:40.399649 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerDied","Data":"91318435fd8ff17270e1e376855be81014afbb2e78f3fe274bba1f4e467b1da5"} Jan 26 19:51:41 crc kubenswrapper[4737]: I0126 19:51:41.125420 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qjsbs" podUID="f325c214-4902-4a66-a21c-d29413e523f3" containerName="registry-server" probeResult="failure" output=< Jan 26 19:51:41 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:51:41 crc kubenswrapper[4737]: > Jan 26 19:51:41 crc kubenswrapper[4737]: I0126 19:51:41.415700 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerStarted","Data":"27d174ba069972d7cd3bf6275526a75cb5237ec06237a6cf74e1ef381210730a"} Jan 26 19:51:41 crc kubenswrapper[4737]: I0126 19:51:41.448343 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5xbnr" podStartSLOduration=5.019222954 podStartE2EDuration="8.448296657s" podCreationTimestamp="2026-01-26 19:51:33 +0000 UTC" firstStartedPulling="2026-01-26 19:51:37.369481861 +0000 UTC m=+4870.677676579" lastFinishedPulling="2026-01-26 19:51:40.798555574 +0000 UTC m=+4874.106750282" observedRunningTime="2026-01-26 19:51:41.433160215 +0000 UTC m=+4874.741354933" watchObservedRunningTime="2026-01-26 19:51:41.448296657 +0000 UTC m=+4874.756491365" Jan 26 19:51:43 crc kubenswrapper[4737]: I0126 19:51:43.809327 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:43 crc kubenswrapper[4737]: I0126 19:51:43.810042 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:44 crc kubenswrapper[4737]: I0126 19:51:44.515859 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:50 crc kubenswrapper[4737]: I0126 19:51:50.068818 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:50 crc kubenswrapper[4737]: I0126 19:51:50.135622 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjsbs" Jan 26 19:51:53 crc kubenswrapper[4737]: I0126 19:51:53.109528 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjsbs"] Jan 26 19:51:53 crc kubenswrapper[4737]: I0126 19:51:53.671969 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 19:51:53 crc kubenswrapper[4737]: I0126 19:51:53.672581 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2v6jg" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="registry-server" containerID="cri-o://0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f" gracePeriod=2 Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.022329 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.410768 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2v6jg" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.555113 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities\") pod \"575ea0ec-a40c-47ca-b30d-a1907aca111e\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.555670 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6t98\" (UniqueName: \"kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98\") pod \"575ea0ec-a40c-47ca-b30d-a1907aca111e\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.555854 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content\") pod \"575ea0ec-a40c-47ca-b30d-a1907aca111e\" (UID: \"575ea0ec-a40c-47ca-b30d-a1907aca111e\") " Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.556530 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities" (OuterVolumeSpecName: "utilities") pod "575ea0ec-a40c-47ca-b30d-a1907aca111e" (UID: "575ea0ec-a40c-47ca-b30d-a1907aca111e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.574732 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98" (OuterVolumeSpecName: "kube-api-access-n6t98") pod "575ea0ec-a40c-47ca-b30d-a1907aca111e" (UID: "575ea0ec-a40c-47ca-b30d-a1907aca111e"). InnerVolumeSpecName "kube-api-access-n6t98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.607789 4737 generic.go:334] "Generic (PLEG): container finished" podID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerID="0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f" exitCode=0 Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.607843 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerDied","Data":"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f"} Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.607877 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2v6jg" event={"ID":"575ea0ec-a40c-47ca-b30d-a1907aca111e","Type":"ContainerDied","Data":"ae2399c04ea4321b84e4a1447f6109fa675d51426d46929665b53a42fd760314"} Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.607884 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2v6jg" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.607911 4737 scope.go:117] "RemoveContainer" containerID="0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.631852 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "575ea0ec-a40c-47ca-b30d-a1907aca111e" (UID: "575ea0ec-a40c-47ca-b30d-a1907aca111e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.657887 4737 scope.go:117] "RemoveContainer" containerID="086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.658405 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6t98\" (UniqueName: \"kubernetes.io/projected/575ea0ec-a40c-47ca-b30d-a1907aca111e-kube-api-access-n6t98\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.658438 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.658463 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/575ea0ec-a40c-47ca-b30d-a1907aca111e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.684245 4737 scope.go:117] "RemoveContainer" containerID="3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.740182 4737 scope.go:117] "RemoveContainer" containerID="0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f" Jan 26 19:51:54 crc kubenswrapper[4737]: E0126 19:51:54.740746 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f\": container with ID starting with 0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f not found: ID does not exist" containerID="0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.740806 4737 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f"} err="failed to get container status \"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f\": rpc error: code = NotFound desc = could not find container \"0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f\": container with ID starting with 0be326261e68101c3eb5a405570846262b6f6b7520dd842ce539573b7385531f not found: ID does not exist" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.740840 4737 scope.go:117] "RemoveContainer" containerID="086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2" Jan 26 19:51:54 crc kubenswrapper[4737]: E0126 19:51:54.741288 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2\": container with ID starting with 086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2 not found: ID does not exist" containerID="086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.741322 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2"} err="failed to get container status \"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2\": rpc error: code = NotFound desc = could not find container \"086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2\": container with ID starting with 086facb998d04828823338abf5e23dee88c969e3333006c51a2bcc3193ea85e2 not found: ID does not exist" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.741342 4737 scope.go:117] "RemoveContainer" containerID="3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10" Jan 26 19:51:54 crc kubenswrapper[4737]: E0126 19:51:54.741605 4737 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10\": container with ID starting with 3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10 not found: ID does not exist" containerID="3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.741626 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10"} err="failed to get container status \"3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10\": rpc error: code = NotFound desc = could not find container \"3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10\": container with ID starting with 3e13a1c6f0f86958a752bebeb338d0cdc4c99611ddedb77902aa1f616b602e10 not found: ID does not exist" Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.955632 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.970355 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2v6jg"] Jan 26 19:51:54 crc kubenswrapper[4737]: I0126 19:51:54.997852 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" path="/var/lib/kubelet/pods/575ea0ec-a40c-47ca-b30d-a1907aca111e/volumes" Jan 26 19:51:56 crc kubenswrapper[4737]: I0126 19:51:56.467898 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:56 crc kubenswrapper[4737]: I0126 19:51:56.468705 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5xbnr" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="registry-server" 
containerID="cri-o://27d174ba069972d7cd3bf6275526a75cb5237ec06237a6cf74e1ef381210730a" gracePeriod=2 Jan 26 19:51:56 crc kubenswrapper[4737]: I0126 19:51:56.641787 4737 generic.go:334] "Generic (PLEG): container finished" podID="916c9a24-683b-491a-b306-24862478ba4c" containerID="27d174ba069972d7cd3bf6275526a75cb5237ec06237a6cf74e1ef381210730a" exitCode=0 Jan 26 19:51:56 crc kubenswrapper[4737]: I0126 19:51:56.641834 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerDied","Data":"27d174ba069972d7cd3bf6275526a75cb5237ec06237a6cf74e1ef381210730a"} Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.531673 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.638588 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content\") pod \"916c9a24-683b-491a-b306-24862478ba4c\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.638641 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities\") pod \"916c9a24-683b-491a-b306-24862478ba4c\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.638689 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldxww\" (UniqueName: \"kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww\") pod \"916c9a24-683b-491a-b306-24862478ba4c\" (UID: \"916c9a24-683b-491a-b306-24862478ba4c\") " Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 
19:51:57.645691 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww" (OuterVolumeSpecName: "kube-api-access-ldxww") pod "916c9a24-683b-491a-b306-24862478ba4c" (UID: "916c9a24-683b-491a-b306-24862478ba4c"). InnerVolumeSpecName "kube-api-access-ldxww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.646584 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities" (OuterVolumeSpecName: "utilities") pod "916c9a24-683b-491a-b306-24862478ba4c" (UID: "916c9a24-683b-491a-b306-24862478ba4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.676483 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xbnr" event={"ID":"916c9a24-683b-491a-b306-24862478ba4c","Type":"ContainerDied","Data":"96eada9ce7840bce3df0d372b8508a22f43c9e9e48f33894dbeb4b6d4cf3c3e4"} Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.677106 4737 scope.go:117] "RemoveContainer" containerID="27d174ba069972d7cd3bf6275526a75cb5237ec06237a6cf74e1ef381210730a" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.677287 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xbnr" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.699407 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "916c9a24-683b-491a-b306-24862478ba4c" (UID: "916c9a24-683b-491a-b306-24862478ba4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.735984 4737 scope.go:117] "RemoveContainer" containerID="91318435fd8ff17270e1e376855be81014afbb2e78f3fe274bba1f4e467b1da5" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.742373 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.742405 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c9a24-683b-491a-b306-24862478ba4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.742416 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldxww\" (UniqueName: \"kubernetes.io/projected/916c9a24-683b-491a-b306-24862478ba4c-kube-api-access-ldxww\") on node \"crc\" DevicePath \"\"" Jan 26 19:51:57 crc kubenswrapper[4737]: I0126 19:51:57.760708 4737 scope.go:117] "RemoveContainer" containerID="8e5e91440131074d527e216420d04f47414f10f0f594336f1e5a68a8ecb47e20" Jan 26 19:51:58 crc kubenswrapper[4737]: I0126 19:51:58.012020 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:58 crc kubenswrapper[4737]: I0126 19:51:58.024324 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5xbnr"] Jan 26 19:51:58 crc kubenswrapper[4737]: I0126 19:51:58.995714 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916c9a24-683b-491a-b306-24862478ba4c" path="/var/lib/kubelet/pods/916c9a24-683b-491a-b306-24862478ba4c/volumes" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.722004 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:52:49 crc 
kubenswrapper[4737]: E0126 19:52:49.723197 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="extract-content" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723216 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="extract-content" Jan 26 19:52:49 crc kubenswrapper[4737]: E0126 19:52:49.723241 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="extract-content" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723249 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="extract-content" Jan 26 19:52:49 crc kubenswrapper[4737]: E0126 19:52:49.723261 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723270 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: E0126 19:52:49.723299 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="extract-utilities" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723306 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="extract-utilities" Jan 26 19:52:49 crc kubenswrapper[4737]: E0126 19:52:49.723331 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="extract-utilities" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723339 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="extract-utilities" Jan 26 19:52:49 crc 
kubenswrapper[4737]: E0126 19:52:49.723355 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723363 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723677 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="916c9a24-683b-491a-b306-24862478ba4c" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.723706 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="575ea0ec-a40c-47ca-b30d-a1907aca111e" containerName="registry-server" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.724797 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.727427 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tk496" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.728088 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.728316 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.728574 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.741802 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.859739 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.860149 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.860459 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.860604 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.860764 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.860922 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.861034 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.861216 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.861383 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvk65\" (UniqueName: \"kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964012 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964149 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964183 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964223 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964260 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964281 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964321 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964358 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvk65\" (UniqueName: \"kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.964410 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.965940 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.966014 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.966158 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") device mount path 
\"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.966604 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:49 crc kubenswrapper[4737]: I0126 19:52:49.966611 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.367272 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.367447 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvk65\" (UniqueName: \"kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.367474 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.370765 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.423683 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " pod="openstack/tempest-tests-tempest" Jan 26 19:52:50 crc kubenswrapper[4737]: I0126 19:52:50.644210 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 19:52:51 crc kubenswrapper[4737]: I0126 19:52:51.182975 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:52:51 crc kubenswrapper[4737]: I0126 19:52:51.186113 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:52:51 crc kubenswrapper[4737]: I0126 19:52:51.297613 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d81cdf24-ce67-401f-869f-805f4718fce4","Type":"ContainerStarted","Data":"a6ed3328d7e95852106d94e6730d632042147056c71b8fb6b8f2dfe6e6362332"} Jan 26 19:53:28 crc kubenswrapper[4737]: E0126 19:53:28.583198 4737 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 26 19:53:28 crc kubenswrapper[4737]: E0126 19:53:28.585242 4737 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvk65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d81cdf24-ce67-401f-869f-805f4718fce4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:53:28 crc kubenswrapper[4737]: E0126 19:53:28.586957 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d81cdf24-ce67-401f-869f-805f4718fce4" Jan 26 19:53:28 crc kubenswrapper[4737]: E0126 19:53:28.788690 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d81cdf24-ce67-401f-869f-805f4718fce4" Jan 26 19:53:44 crc 
kubenswrapper[4737]: I0126 19:53:44.418186 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 19:53:46 crc kubenswrapper[4737]: I0126 19:53:46.003188 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d81cdf24-ce67-401f-869f-805f4718fce4","Type":"ContainerStarted","Data":"180b2af4ea7eabf4e2acf652f681d686309ee1b6332346cbf09d3cf12422b349"} Jan 26 19:53:46 crc kubenswrapper[4737]: I0126 19:53:46.038932 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.809263265 podStartE2EDuration="58.038908902s" podCreationTimestamp="2026-01-26 19:52:48 +0000 UTC" firstStartedPulling="2026-01-26 19:52:51.185831382 +0000 UTC m=+4944.494026090" lastFinishedPulling="2026-01-26 19:53:44.415477019 +0000 UTC m=+4997.723671727" observedRunningTime="2026-01-26 19:53:46.030542037 +0000 UTC m=+4999.338736745" watchObservedRunningTime="2026-01-26 19:53:46.038908902 +0000 UTC m=+4999.347103610" Jan 26 19:54:00 crc kubenswrapper[4737]: I0126 19:54:00.949419 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:54:00 crc kubenswrapper[4737]: I0126 19:54:00.950203 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:54:30 crc kubenswrapper[4737]: I0126 19:54:30.949459 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:54:30 crc kubenswrapper[4737]: I0126 19:54:30.950408 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:55:00 crc kubenswrapper[4737]: I0126 19:55:00.960682 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:55:00 crc kubenswrapper[4737]: I0126 19:55:00.961462 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:55:00 crc kubenswrapper[4737]: I0126 19:55:00.961960 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:55:00 crc kubenswrapper[4737]: I0126 19:55:00.963719 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:55:00 crc 
kubenswrapper[4737]: I0126 19:55:00.964183 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0" gracePeriod=600 Jan 26 19:55:01 crc kubenswrapper[4737]: I0126 19:55:01.993973 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0" exitCode=0 Jan 26 19:55:01 crc kubenswrapper[4737]: I0126 19:55:01.994053 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0"} Jan 26 19:55:01 crc kubenswrapper[4737]: I0126 19:55:01.994697 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090"} Jan 26 19:55:01 crc kubenswrapper[4737]: I0126 19:55:01.995261 4737 scope.go:117] "RemoveContainer" containerID="7b6fabd15a79cd275ed2884c7ff8f267e25e1f7cb223b3a1ecfae218c1fe84b3" Jan 26 19:57:30 crc kubenswrapper[4737]: I0126 19:57:30.950135 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:57:30 crc kubenswrapper[4737]: I0126 19:57:30.951323 4737 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:58:00 crc kubenswrapper[4737]: I0126 19:58:00.949440 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:58:00 crc kubenswrapper[4737]: I0126 19:58:00.949929 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.075629 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.083468 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.228963 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.229117 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.229204 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.263463 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.331548 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.331719 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.331848 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.333031 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.333033 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.361563 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf\") pod \"redhat-marketplace-6t4x6\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:14 crc kubenswrapper[4737]: I0126 19:58:14.406398 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:15 crc kubenswrapper[4737]: I0126 19:58:15.279968 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:15 crc kubenswrapper[4737]: I0126 19:58:15.784479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerStarted","Data":"d7eedef83d81c98a6509e2a55ca544c569e130628cb83f722c877d43b468146b"} Jan 26 19:58:16 crc kubenswrapper[4737]: I0126 19:58:16.795733 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerDied","Data":"73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9"} Jan 26 19:58:16 crc kubenswrapper[4737]: I0126 19:58:16.795827 4737 generic.go:334] "Generic (PLEG): container finished" podID="77870c35-1820-490c-8302-b6d11377d858" containerID="73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9" exitCode=0 Jan 26 19:58:16 crc kubenswrapper[4737]: I0126 19:58:16.799911 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:58:18 crc kubenswrapper[4737]: I0126 19:58:18.821849 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerStarted","Data":"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6"} Jan 26 19:58:19 crc kubenswrapper[4737]: I0126 19:58:19.834626 4737 generic.go:334] "Generic (PLEG): container finished" podID="77870c35-1820-490c-8302-b6d11377d858" containerID="dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6" exitCode=0 Jan 26 19:58:19 crc kubenswrapper[4737]: I0126 19:58:19.834733 4737 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerDied","Data":"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6"} Jan 26 19:58:20 crc kubenswrapper[4737]: I0126 19:58:20.849339 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerStarted","Data":"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f"} Jan 26 19:58:20 crc kubenswrapper[4737]: I0126 19:58:20.883165 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6t4x6" podStartSLOduration=4.396528039 podStartE2EDuration="7.882794804s" podCreationTimestamp="2026-01-26 19:58:13 +0000 UTC" firstStartedPulling="2026-01-26 19:58:16.798379341 +0000 UTC m=+5270.106574049" lastFinishedPulling="2026-01-26 19:58:20.284646106 +0000 UTC m=+5273.592840814" observedRunningTime="2026-01-26 19:58:20.867604282 +0000 UTC m=+5274.175798990" watchObservedRunningTime="2026-01-26 19:58:20.882794804 +0000 UTC m=+5274.190989512" Jan 26 19:58:24 crc kubenswrapper[4737]: I0126 19:58:24.407486 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:24 crc kubenswrapper[4737]: I0126 19:58:24.408692 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:24 crc kubenswrapper[4737]: I0126 19:58:24.456144 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:25 crc kubenswrapper[4737]: I0126 19:58:25.976831 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:26 crc kubenswrapper[4737]: I0126 19:58:26.051669 4737 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:27 crc kubenswrapper[4737]: I0126 19:58:27.922772 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6t4x6" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="registry-server" containerID="cri-o://46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f" gracePeriod=2 Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.554224 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.707195 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities\") pod \"77870c35-1820-490c-8302-b6d11377d858\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.707711 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content\") pod \"77870c35-1820-490c-8302-b6d11377d858\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.707968 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf\") pod \"77870c35-1820-490c-8302-b6d11377d858\" (UID: \"77870c35-1820-490c-8302-b6d11377d858\") " Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.708946 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities" (OuterVolumeSpecName: "utilities") pod 
"77870c35-1820-490c-8302-b6d11377d858" (UID: "77870c35-1820-490c-8302-b6d11377d858"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.709693 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.716672 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf" (OuterVolumeSpecName: "kube-api-access-zzzpf") pod "77870c35-1820-490c-8302-b6d11377d858" (UID: "77870c35-1820-490c-8302-b6d11377d858"). InnerVolumeSpecName "kube-api-access-zzzpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.728319 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77870c35-1820-490c-8302-b6d11377d858" (UID: "77870c35-1820-490c-8302-b6d11377d858"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.812078 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzzpf\" (UniqueName: \"kubernetes.io/projected/77870c35-1820-490c-8302-b6d11377d858-kube-api-access-zzzpf\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.812131 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77870c35-1820-490c-8302-b6d11377d858-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.933175 4737 generic.go:334] "Generic (PLEG): container finished" podID="77870c35-1820-490c-8302-b6d11377d858" containerID="46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f" exitCode=0 Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.933224 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerDied","Data":"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f"} Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.933262 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t4x6" event={"ID":"77870c35-1820-490c-8302-b6d11377d858","Type":"ContainerDied","Data":"d7eedef83d81c98a6509e2a55ca544c569e130628cb83f722c877d43b468146b"} Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.933285 4737 scope.go:117] "RemoveContainer" containerID="46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.933484 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t4x6" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.958878 4737 scope.go:117] "RemoveContainer" containerID="dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6" Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.994718 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.995698 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t4x6"] Jan 26 19:58:28 crc kubenswrapper[4737]: I0126 19:58:28.998872 4737 scope.go:117] "RemoveContainer" containerID="73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.038238 4737 scope.go:117] "RemoveContainer" containerID="46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f" Jan 26 19:58:29 crc kubenswrapper[4737]: E0126 19:58:29.039051 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f\": container with ID starting with 46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f not found: ID does not exist" containerID="46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.039094 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f"} err="failed to get container status \"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f\": rpc error: code = NotFound desc = could not find container \"46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f\": container with ID starting with 46bf3f06fa89900a36156803a188b7ccdd22245ad8ec1f42a2b4c794b592164f not found: 
ID does not exist" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.039131 4737 scope.go:117] "RemoveContainer" containerID="dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6" Jan 26 19:58:29 crc kubenswrapper[4737]: E0126 19:58:29.039521 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6\": container with ID starting with dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6 not found: ID does not exist" containerID="dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.039595 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6"} err="failed to get container status \"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6\": rpc error: code = NotFound desc = could not find container \"dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6\": container with ID starting with dc69557a929b85de3a72a5bccb9f088fdd1330fb5304a27113c3977fed7ad2c6 not found: ID does not exist" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.039643 4737 scope.go:117] "RemoveContainer" containerID="73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9" Jan 26 19:58:29 crc kubenswrapper[4737]: E0126 19:58:29.040050 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9\": container with ID starting with 73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9 not found: ID does not exist" containerID="73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9" Jan 26 19:58:29 crc kubenswrapper[4737]: I0126 19:58:29.040080 4737 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9"} err="failed to get container status \"73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9\": rpc error: code = NotFound desc = could not find container \"73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9\": container with ID starting with 73facc4b741ad860020eb8a1cf49324c987415e7ad6f6ec6a6418cf6470150f9 not found: ID does not exist" Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.948951 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.949511 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.949558 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.950562 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.950623 4737 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" gracePeriod=600 Jan 26 19:58:30 crc kubenswrapper[4737]: I0126 19:58:30.997219 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77870c35-1820-490c-8302-b6d11377d858" path="/var/lib/kubelet/pods/77870c35-1820-490c-8302-b6d11377d858/volumes" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.407552 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:58:31 crc kubenswrapper[4737]: E0126 19:58:31.408049 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="extract-content" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.408917 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="extract-content" Jan 26 19:58:31 crc kubenswrapper[4737]: E0126 19:58:31.408964 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="extract-utilities" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.408973 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="extract-utilities" Jan 26 19:58:31 crc kubenswrapper[4737]: E0126 19:58:31.408983 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="registry-server" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.408989 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="registry-server" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.409392 4737 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="77870c35-1820-490c-8302-b6d11377d858" containerName="registry-server" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.411626 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.440074 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.585630 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.586175 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q95r\" (UniqueName: \"kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.586210 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.688690 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q95r\" (UniqueName: \"kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r\") pod 
\"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.689023 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.689384 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.689728 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.689755 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.769388 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q95r\" (UniqueName: \"kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r\") pod \"redhat-operators-vh5vs\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " 
pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.970361 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" exitCode=0 Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.970422 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090"} Jan 26 19:58:31 crc kubenswrapper[4737]: I0126 19:58:31.970490 4737 scope.go:117] "RemoveContainer" containerID="c38991e4ff60ea23b7470444b242edde168e75e6987918f8aae48b15bb03a5b0" Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.034746 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:32 crc kubenswrapper[4737]: E0126 19:58:32.295878 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.592254 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.984271 4737 generic.go:334] "Generic (PLEG): container finished" podID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerID="216d8b97e109beb90580ad4891f11bb304dd2c39197d145164401df3c9a3853a" exitCode=0 Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.991802 4737 
scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:58:32 crc kubenswrapper[4737]: E0126 19:58:32.992139 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.994791 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerDied","Data":"216d8b97e109beb90580ad4891f11bb304dd2c39197d145164401df3c9a3853a"} Jan 26 19:58:32 crc kubenswrapper[4737]: I0126 19:58:32.994833 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerStarted","Data":"ea7f1334bb816b6606f8b4a5a1c24ab40e883ef15069b9324cc42dc40ac462d2"} Jan 26 19:58:35 crc kubenswrapper[4737]: I0126 19:58:35.017917 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerStarted","Data":"65101d2841c794cf0101cb56c3c2216685b1c3baea479071d38ec3c7514f736c"} Jan 26 19:58:39 crc kubenswrapper[4737]: I0126 19:58:39.072121 4737 generic.go:334] "Generic (PLEG): container finished" podID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerID="65101d2841c794cf0101cb56c3c2216685b1c3baea479071d38ec3c7514f736c" exitCode=0 Jan 26 19:58:39 crc kubenswrapper[4737]: I0126 19:58:39.072188 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" 
event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerDied","Data":"65101d2841c794cf0101cb56c3c2216685b1c3baea479071d38ec3c7514f736c"} Jan 26 19:58:40 crc kubenswrapper[4737]: I0126 19:58:40.100629 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerStarted","Data":"2079b77a841c38ed6b6c0ecbf920e42629d9131b874bf93df0a78ab05150b793"} Jan 26 19:58:42 crc kubenswrapper[4737]: I0126 19:58:42.035321 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:42 crc kubenswrapper[4737]: I0126 19:58:42.035653 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:58:43 crc kubenswrapper[4737]: I0126 19:58:43.088352 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh5vs" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" probeResult="failure" output=< Jan 26 19:58:43 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:58:43 crc kubenswrapper[4737]: > Jan 26 19:58:46 crc kubenswrapper[4737]: I0126 19:58:46.993731 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:58:46 crc kubenswrapper[4737]: E0126 19:58:46.994659 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:58:53 crc kubenswrapper[4737]: I0126 19:58:53.105309 4737 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh5vs" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" probeResult="failure" output=< Jan 26 19:58:53 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 19:58:53 crc kubenswrapper[4737]: > Jan 26 19:59:01 crc kubenswrapper[4737]: I0126 19:59:01.982810 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:59:01 crc kubenswrapper[4737]: E0126 19:59:01.988644 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:59:02 crc kubenswrapper[4737]: I0126 19:59:02.082767 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:59:02 crc kubenswrapper[4737]: I0126 19:59:02.112477 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vh5vs" podStartSLOduration=24.606514913 podStartE2EDuration="31.112096219s" podCreationTimestamp="2026-01-26 19:58:31 +0000 UTC" firstStartedPulling="2026-01-26 19:58:32.987576034 +0000 UTC m=+5286.295770742" lastFinishedPulling="2026-01-26 19:58:39.49315734 +0000 UTC m=+5292.801352048" observedRunningTime="2026-01-26 19:58:40.127727891 +0000 UTC m=+5293.435922599" watchObservedRunningTime="2026-01-26 19:59:02.112096219 +0000 UTC m=+5315.420290937" Jan 26 19:59:02 crc kubenswrapper[4737]: I0126 19:59:02.135188 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:59:02 crc kubenswrapper[4737]: I0126 19:59:02.615490 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:59:03 crc kubenswrapper[4737]: I0126 19:59:03.335374 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vh5vs" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" containerID="cri-o://2079b77a841c38ed6b6c0ecbf920e42629d9131b874bf93df0a78ab05150b793" gracePeriod=2 Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.360322 4737 generic.go:334] "Generic (PLEG): container finished" podID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerID="2079b77a841c38ed6b6c0ecbf920e42629d9131b874bf93df0a78ab05150b793" exitCode=0 Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.360386 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerDied","Data":"2079b77a841c38ed6b6c0ecbf920e42629d9131b874bf93df0a78ab05150b793"} Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.558301 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.609223 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q95r\" (UniqueName: \"kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r\") pod \"631e276b-eec7-456b-8ae8-ee31078c12fd\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.609300 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content\") pod \"631e276b-eec7-456b-8ae8-ee31078c12fd\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.609382 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities\") pod \"631e276b-eec7-456b-8ae8-ee31078c12fd\" (UID: \"631e276b-eec7-456b-8ae8-ee31078c12fd\") " Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.610773 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities" (OuterVolumeSpecName: "utilities") pod "631e276b-eec7-456b-8ae8-ee31078c12fd" (UID: "631e276b-eec7-456b-8ae8-ee31078c12fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.630956 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r" (OuterVolumeSpecName: "kube-api-access-9q95r") pod "631e276b-eec7-456b-8ae8-ee31078c12fd" (UID: "631e276b-eec7-456b-8ae8-ee31078c12fd"). InnerVolumeSpecName "kube-api-access-9q95r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.717333 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q95r\" (UniqueName: \"kubernetes.io/projected/631e276b-eec7-456b-8ae8-ee31078c12fd-kube-api-access-9q95r\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.717386 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.747326 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "631e276b-eec7-456b-8ae8-ee31078c12fd" (UID: "631e276b-eec7-456b-8ae8-ee31078c12fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:04 crc kubenswrapper[4737]: I0126 19:59:04.819902 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631e276b-eec7-456b-8ae8-ee31078c12fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.374487 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5vs" event={"ID":"631e276b-eec7-456b-8ae8-ee31078c12fd","Type":"ContainerDied","Data":"ea7f1334bb816b6606f8b4a5a1c24ab40e883ef15069b9324cc42dc40ac462d2"} Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.374902 4737 scope.go:117] "RemoveContainer" containerID="2079b77a841c38ed6b6c0ecbf920e42629d9131b874bf93df0a78ab05150b793" Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.375416 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5vs" Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.411291 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.421562 4737 scope.go:117] "RemoveContainer" containerID="65101d2841c794cf0101cb56c3c2216685b1c3baea479071d38ec3c7514f736c" Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.426208 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vh5vs"] Jan 26 19:59:05 crc kubenswrapper[4737]: I0126 19:59:05.457887 4737 scope.go:117] "RemoveContainer" containerID="216d8b97e109beb90580ad4891f11bb304dd2c39197d145164401df3c9a3853a" Jan 26 19:59:07 crc kubenswrapper[4737]: I0126 19:59:07.005255 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" path="/var/lib/kubelet/pods/631e276b-eec7-456b-8ae8-ee31078c12fd/volumes" Jan 26 19:59:12 crc kubenswrapper[4737]: I0126 19:59:12.983158 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:59:12 crc kubenswrapper[4737]: E0126 19:59:12.984568 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:59:27 crc kubenswrapper[4737]: I0126 19:59:27.983956 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:59:27 crc kubenswrapper[4737]: E0126 19:59:27.985270 4737 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:59:38 crc kubenswrapper[4737]: I0126 19:59:38.983200 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:59:38 crc kubenswrapper[4737]: E0126 19:59:38.984204 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 19:59:53 crc kubenswrapper[4737]: I0126 19:59:53.981751 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 19:59:53 crc kubenswrapper[4737]: E0126 19:59:53.982706 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.323607 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx"] Jan 26 20:00:00 crc kubenswrapper[4737]: E0126 20:00:00.331648 4737 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.331679 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4737]: E0126 20:00:00.331967 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.331978 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4737]: E0126 20:00:00.332010 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.332017 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.332778 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="631e276b-eec7-456b-8ae8-ee31078c12fd" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.333811 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.337621 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.338354 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.360884 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx"] Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.363919 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.364005 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.364046 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brdf\" (UniqueName: \"kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.465697 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.465776 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.465813 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2brdf\" (UniqueName: \"kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.466945 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.474436 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.481751 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2brdf\" (UniqueName: \"kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf\") pod \"collect-profiles-29490960-v46dx\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:00 crc kubenswrapper[4737]: I0126 20:00:00.660055 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:01 crc kubenswrapper[4737]: I0126 20:00:01.246788 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx"] Jan 26 20:00:02 crc kubenswrapper[4737]: I0126 20:00:02.025055 4737 generic.go:334] "Generic (PLEG): container finished" podID="9b92cb62-fe44-4a02-afd4-31259c19fa4d" containerID="fd49dc733278a720866b7876a57c4c440a948757f6c05ee06f7a7d3c81b8fb0a" exitCode=0 Jan 26 20:00:02 crc kubenswrapper[4737]: I0126 20:00:02.025197 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" event={"ID":"9b92cb62-fe44-4a02-afd4-31259c19fa4d","Type":"ContainerDied","Data":"fd49dc733278a720866b7876a57c4c440a948757f6c05ee06f7a7d3c81b8fb0a"} Jan 26 20:00:02 crc kubenswrapper[4737]: I0126 20:00:02.025414 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" 
event={"ID":"9b92cb62-fe44-4a02-afd4-31259c19fa4d","Type":"ContainerStarted","Data":"8d54f5b48558d0ac8f8341ff092b804adef0351749998536517067a448e76ddc"} Jan 26 20:00:03 crc kubenswrapper[4737]: I0126 20:00:03.931496 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.049743 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" event={"ID":"9b92cb62-fe44-4a02-afd4-31259c19fa4d","Type":"ContainerDied","Data":"8d54f5b48558d0ac8f8341ff092b804adef0351749998536517067a448e76ddc"} Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.049828 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-v46dx" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.050126 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d54f5b48558d0ac8f8341ff092b804adef0351749998536517067a448e76ddc" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.102958 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2brdf\" (UniqueName: \"kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf\") pod \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.103020 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume\") pod \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.103149 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume\") pod \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\" (UID: \"9b92cb62-fe44-4a02-afd4-31259c19fa4d\") " Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.104643 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b92cb62-fe44-4a02-afd4-31259c19fa4d" (UID: "9b92cb62-fe44-4a02-afd4-31259c19fa4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.113776 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b92cb62-fe44-4a02-afd4-31259c19fa4d" (UID: "9b92cb62-fe44-4a02-afd4-31259c19fa4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.121583 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf" (OuterVolumeSpecName: "kube-api-access-2brdf") pod "9b92cb62-fe44-4a02-afd4-31259c19fa4d" (UID: "9b92cb62-fe44-4a02-afd4-31259c19fa4d"). InnerVolumeSpecName "kube-api-access-2brdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.206298 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2brdf\" (UniqueName: \"kubernetes.io/projected/9b92cb62-fe44-4a02-afd4-31259c19fa4d-kube-api-access-2brdf\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.206334 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b92cb62-fe44-4a02-afd4-31259c19fa4d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:04 crc kubenswrapper[4737]: I0126 20:00:04.206344 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92cb62-fe44-4a02-afd4-31259c19fa4d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:00:05 crc kubenswrapper[4737]: I0126 20:00:05.057205 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25"] Jan 26 20:00:05 crc kubenswrapper[4737]: I0126 20:00:05.084653 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-9jj25"] Jan 26 20:00:07 crc kubenswrapper[4737]: I0126 20:00:07.000849 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beeb9ebb-aa23-459a-b6f3-6ca0857850c4" path="/var/lib/kubelet/pods/beeb9ebb-aa23-459a-b6f3-6ca0857850c4/volumes" Jan 26 20:00:08 crc kubenswrapper[4737]: I0126 20:00:08.982329 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:00:08 crc kubenswrapper[4737]: E0126 20:00:08.983161 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:00:21 crc kubenswrapper[4737]: I0126 20:00:21.982695 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:00:21 crc kubenswrapper[4737]: E0126 20:00:21.983983 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:00:33 crc kubenswrapper[4737]: I0126 20:00:33.982585 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:00:33 crc kubenswrapper[4737]: E0126 20:00:33.983700 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:00:38 crc kubenswrapper[4737]: I0126 20:00:38.013216 4737 scope.go:117] "RemoveContainer" containerID="b16304b51cc0fac85dc1c289bc4c7b4734327cd5ded61b60e24e1c9857b527e7" Jan 26 20:00:46 crc kubenswrapper[4737]: I0126 20:00:46.990368 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:00:46 crc kubenswrapper[4737]: E0126 20:00:46.991052 4737 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.168878 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490961-hfz45"] Jan 26 20:01:00 crc kubenswrapper[4737]: E0126 20:01:00.170256 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92cb62-fe44-4a02-afd4-31259c19fa4d" containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.170274 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92cb62-fe44-4a02-afd4-31259c19fa4d" containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.170560 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b92cb62-fe44-4a02-afd4-31259c19fa4d" containerName="collect-profiles" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.171711 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.198695 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-hfz45"] Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.348468 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jhj\" (UniqueName: \"kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.349000 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.349092 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.349152 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.452243 4737 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-46jhj\" (UniqueName: \"kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.452417 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.452447 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.452466 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.462424 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.462994 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.464994 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.470556 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jhj\" (UniqueName: \"kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj\") pod \"keystone-cron-29490961-hfz45\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:00 crc kubenswrapper[4737]: I0126 20:01:00.511917 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:01 crc kubenswrapper[4737]: I0126 20:01:01.051438 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-hfz45"] Jan 26 20:01:01 crc kubenswrapper[4737]: I0126 20:01:01.706307 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-hfz45" event={"ID":"5f36a330-35fc-46b8-9f3f-4648e4e5485c","Type":"ContainerStarted","Data":"8860d3fad9dc7dea508c5bdce9ea603712b14f82946e67b99c19feae960c796e"} Jan 26 20:01:01 crc kubenswrapper[4737]: I0126 20:01:01.706635 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-hfz45" event={"ID":"5f36a330-35fc-46b8-9f3f-4648e4e5485c","Type":"ContainerStarted","Data":"e3d790be8da4f49e5f7d65c9765582532abff38e6431e6305b445fbac691ae79"} Jan 26 20:01:01 crc kubenswrapper[4737]: I0126 20:01:01.739305 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490961-hfz45" podStartSLOduration=1.7392816930000001 podStartE2EDuration="1.739281693s" podCreationTimestamp="2026-01-26 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:01:01.72769139 +0000 UTC m=+5435.035886098" watchObservedRunningTime="2026-01-26 20:01:01.739281693 +0000 UTC m=+5435.047476401" Jan 26 20:01:01 crc kubenswrapper[4737]: I0126 20:01:01.982164 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:01:01 crc kubenswrapper[4737]: E0126 20:01:01.982492 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:04 crc kubenswrapper[4737]: I0126 20:01:04.743150 4737 generic.go:334] "Generic (PLEG): container finished" podID="5f36a330-35fc-46b8-9f3f-4648e4e5485c" containerID="8860d3fad9dc7dea508c5bdce9ea603712b14f82946e67b99c19feae960c796e" exitCode=0 Jan 26 20:01:04 crc kubenswrapper[4737]: I0126 20:01:04.743213 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-hfz45" event={"ID":"5f36a330-35fc-46b8-9f3f-4648e4e5485c","Type":"ContainerDied","Data":"8860d3fad9dc7dea508c5bdce9ea603712b14f82946e67b99c19feae960c796e"} Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.396850 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.558957 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46jhj\" (UniqueName: \"kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj\") pod \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.559648 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle\") pod \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.559688 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data\") pod \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 
20:01:06.559747 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys\") pod \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\" (UID: \"5f36a330-35fc-46b8-9f3f-4648e4e5485c\") " Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.566177 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj" (OuterVolumeSpecName: "kube-api-access-46jhj") pod "5f36a330-35fc-46b8-9f3f-4648e4e5485c" (UID: "5f36a330-35fc-46b8-9f3f-4648e4e5485c"). InnerVolumeSpecName "kube-api-access-46jhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.576716 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5f36a330-35fc-46b8-9f3f-4648e4e5485c" (UID: "5f36a330-35fc-46b8-9f3f-4648e4e5485c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.619315 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f36a330-35fc-46b8-9f3f-4648e4e5485c" (UID: "5f36a330-35fc-46b8-9f3f-4648e4e5485c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.637983 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data" (OuterVolumeSpecName: "config-data") pod "5f36a330-35fc-46b8-9f3f-4648e4e5485c" (UID: "5f36a330-35fc-46b8-9f3f-4648e4e5485c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.663983 4737 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.664011 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.664020 4737 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f36a330-35fc-46b8-9f3f-4648e4e5485c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.664028 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46jhj\" (UniqueName: \"kubernetes.io/projected/5f36a330-35fc-46b8-9f3f-4648e4e5485c-kube-api-access-46jhj\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.768562 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-hfz45" event={"ID":"5f36a330-35fc-46b8-9f3f-4648e4e5485c","Type":"ContainerDied","Data":"e3d790be8da4f49e5f7d65c9765582532abff38e6431e6305b445fbac691ae79"} Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.768778 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d790be8da4f49e5f7d65c9765582532abff38e6431e6305b445fbac691ae79" Jan 26 20:01:06 crc kubenswrapper[4737]: I0126 20:01:06.768637 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490961-hfz45" Jan 26 20:01:15 crc kubenswrapper[4737]: I0126 20:01:15.981863 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:01:15 crc kubenswrapper[4737]: E0126 20:01:15.982731 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:30 crc kubenswrapper[4737]: I0126 20:01:30.982351 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:01:30 crc kubenswrapper[4737]: E0126 20:01:30.983086 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.271416 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:36 crc kubenswrapper[4737]: E0126 20:01:36.272874 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f36a330-35fc-46b8-9f3f-4648e4e5485c" containerName="keystone-cron" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.272896 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f36a330-35fc-46b8-9f3f-4648e4e5485c" containerName="keystone-cron" Jan 26 20:01:36 crc 
kubenswrapper[4737]: I0126 20:01:36.273305 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f36a330-35fc-46b8-9f3f-4648e4e5485c" containerName="keystone-cron" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.275490 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.290243 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.312211 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.312265 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mms8l\" (UniqueName: \"kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.312498 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.414598 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.414658 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mms8l\" (UniqueName: \"kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.414748 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.415217 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.415372 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.435351 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mms8l\" (UniqueName: 
\"kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l\") pod \"certified-operators-tgdnc\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:36 crc kubenswrapper[4737]: I0126 20:01:36.598632 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:37 crc kubenswrapper[4737]: I0126 20:01:37.112993 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:38 crc kubenswrapper[4737]: I0126 20:01:38.107741 4737 generic.go:334] "Generic (PLEG): container finished" podID="96022805-a04d-4a11-9bfb-312081638b93" containerID="4a1d3f772dfdbb69ae8da941b16f58935f2271f7c14a759fcc278301c918790b" exitCode=0 Jan 26 20:01:38 crc kubenswrapper[4737]: I0126 20:01:38.107803 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerDied","Data":"4a1d3f772dfdbb69ae8da941b16f58935f2271f7c14a759fcc278301c918790b"} Jan 26 20:01:38 crc kubenswrapper[4737]: I0126 20:01:38.108022 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerStarted","Data":"797a08c4b6ee5766542b45cb2d40c6de3f2d77f37908264226f0a2f8a672418e"} Jan 26 20:01:40 crc kubenswrapper[4737]: I0126 20:01:40.147558 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerStarted","Data":"06267026ef58cb506428ec19916e2a9906d56af7caa6e24324d448bb82c1f80d"} Jan 26 20:01:41 crc kubenswrapper[4737]: I0126 20:01:41.165939 4737 generic.go:334] "Generic (PLEG): container finished" podID="96022805-a04d-4a11-9bfb-312081638b93" 
containerID="06267026ef58cb506428ec19916e2a9906d56af7caa6e24324d448bb82c1f80d" exitCode=0 Jan 26 20:01:41 crc kubenswrapper[4737]: I0126 20:01:41.166030 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerDied","Data":"06267026ef58cb506428ec19916e2a9906d56af7caa6e24324d448bb82c1f80d"} Jan 26 20:01:42 crc kubenswrapper[4737]: I0126 20:01:42.178462 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerStarted","Data":"01dbb48c36dad98cd47c21ac1120a10ac37de7e494714fe7c4701f2f4eb90662"} Jan 26 20:01:42 crc kubenswrapper[4737]: I0126 20:01:42.202502 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgdnc" podStartSLOduration=2.499415517 podStartE2EDuration="6.202487115s" podCreationTimestamp="2026-01-26 20:01:36 +0000 UTC" firstStartedPulling="2026-01-26 20:01:38.110375754 +0000 UTC m=+5471.418570462" lastFinishedPulling="2026-01-26 20:01:41.813447352 +0000 UTC m=+5475.121642060" observedRunningTime="2026-01-26 20:01:42.199287916 +0000 UTC m=+5475.507482624" watchObservedRunningTime="2026-01-26 20:01:42.202487115 +0000 UTC m=+5475.510681823" Jan 26 20:01:42 crc kubenswrapper[4737]: I0126 20:01:42.982530 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:01:42 crc kubenswrapper[4737]: E0126 20:01:42.983253 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:46 crc kubenswrapper[4737]: I0126 20:01:46.599926 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:46 crc kubenswrapper[4737]: I0126 20:01:46.600039 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:47 crc kubenswrapper[4737]: I0126 20:01:46.999064 4737 patch_prober.go:28] interesting pod/thanos-querier-fc8bc4478-pnz7r container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 20:01:47 crc kubenswrapper[4737]: I0126 20:01:46.999239 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-fc8bc4478-pnz7r" podUID="f1458df1-0b67-453c-b067-4823882ec184" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 20:01:47 crc kubenswrapper[4737]: I0126 20:01:47.070438 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:47 crc kubenswrapper[4737]: I0126 20:01:47.307924 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:47 crc kubenswrapper[4737]: I0126 20:01:47.365512 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:49 crc kubenswrapper[4737]: I0126 20:01:49.277287 4737 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-tgdnc" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="registry-server" containerID="cri-o://01dbb48c36dad98cd47c21ac1120a10ac37de7e494714fe7c4701f2f4eb90662" gracePeriod=2 Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.291170 4737 generic.go:334] "Generic (PLEG): container finished" podID="96022805-a04d-4a11-9bfb-312081638b93" containerID="01dbb48c36dad98cd47c21ac1120a10ac37de7e494714fe7c4701f2f4eb90662" exitCode=0 Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.291212 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerDied","Data":"01dbb48c36dad98cd47c21ac1120a10ac37de7e494714fe7c4701f2f4eb90662"} Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.471459 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.474644 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.485844 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.501757 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2vf\" (UniqueName: \"kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.501995 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.502096 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.604664 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2vf\" (UniqueName: \"kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.604833 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.604891 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.605494 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.605568 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.629908 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2vf\" (UniqueName: \"kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf\") pod \"community-operators-xqq59\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.804887 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:01:50 crc kubenswrapper[4737]: I0126 20:01:50.962170 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.013766 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content\") pod \"96022805-a04d-4a11-9bfb-312081638b93\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.014356 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mms8l\" (UniqueName: \"kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l\") pod \"96022805-a04d-4a11-9bfb-312081638b93\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.014804 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities\") pod \"96022805-a04d-4a11-9bfb-312081638b93\" (UID: \"96022805-a04d-4a11-9bfb-312081638b93\") " Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.018287 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities" (OuterVolumeSpecName: "utilities") pod "96022805-a04d-4a11-9bfb-312081638b93" (UID: "96022805-a04d-4a11-9bfb-312081638b93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.029707 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l" (OuterVolumeSpecName: "kube-api-access-mms8l") pod "96022805-a04d-4a11-9bfb-312081638b93" (UID: "96022805-a04d-4a11-9bfb-312081638b93"). InnerVolumeSpecName "kube-api-access-mms8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.110941 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96022805-a04d-4a11-9bfb-312081638b93" (UID: "96022805-a04d-4a11-9bfb-312081638b93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.118541 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mms8l\" (UniqueName: \"kubernetes.io/projected/96022805-a04d-4a11-9bfb-312081638b93-kube-api-access-mms8l\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.118589 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.118604 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96022805-a04d-4a11-9bfb-312081638b93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.306399 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgdnc" 
event={"ID":"96022805-a04d-4a11-9bfb-312081638b93","Type":"ContainerDied","Data":"797a08c4b6ee5766542b45cb2d40c6de3f2d77f37908264226f0a2f8a672418e"} Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.306470 4737 scope.go:117] "RemoveContainer" containerID="01dbb48c36dad98cd47c21ac1120a10ac37de7e494714fe7c4701f2f4eb90662" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.306656 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgdnc" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.346830 4737 scope.go:117] "RemoveContainer" containerID="06267026ef58cb506428ec19916e2a9906d56af7caa6e24324d448bb82c1f80d" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.356736 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.368873 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgdnc"] Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.379333 4737 scope.go:117] "RemoveContainer" containerID="4a1d3f772dfdbb69ae8da941b16f58935f2271f7c14a759fcc278301c918790b" Jan 26 20:01:51 crc kubenswrapper[4737]: I0126 20:01:51.420612 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:01:52 crc kubenswrapper[4737]: I0126 20:01:52.319980 4737 generic.go:334] "Generic (PLEG): container finished" podID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerID="1e2e72b33bd49b9d089cc1d1f69e040446dd9af795b0dcc2fc936a4e910a319a" exitCode=0 Jan 26 20:01:52 crc kubenswrapper[4737]: I0126 20:01:52.320150 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerDied","Data":"1e2e72b33bd49b9d089cc1d1f69e040446dd9af795b0dcc2fc936a4e910a319a"} Jan 26 20:01:52 
crc kubenswrapper[4737]: I0126 20:01:52.320578 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerStarted","Data":"e51bdce1a7d6dcbf28c7ece7be6acd9ec0b74e7eaa7fd65c77cb92dd312750a0"} Jan 26 20:01:52 crc kubenswrapper[4737]: I0126 20:01:52.996049 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96022805-a04d-4a11-9bfb-312081638b93" path="/var/lib/kubelet/pods/96022805-a04d-4a11-9bfb-312081638b93/volumes" Jan 26 20:01:53 crc kubenswrapper[4737]: I0126 20:01:53.335849 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerStarted","Data":"824547fdba56b5c407cc7ff0dcf753d7b4d2dd50b8b5835a9a743a3e316028b3"} Jan 26 20:01:53 crc kubenswrapper[4737]: I0126 20:01:53.981664 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:01:53 crc kubenswrapper[4737]: E0126 20:01:53.982247 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:01:54 crc kubenswrapper[4737]: I0126 20:01:54.351035 4737 generic.go:334] "Generic (PLEG): container finished" podID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerID="824547fdba56b5c407cc7ff0dcf753d7b4d2dd50b8b5835a9a743a3e316028b3" exitCode=0 Jan 26 20:01:54 crc kubenswrapper[4737]: I0126 20:01:54.351206 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" 
event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerDied","Data":"824547fdba56b5c407cc7ff0dcf753d7b4d2dd50b8b5835a9a743a3e316028b3"} Jan 26 20:01:55 crc kubenswrapper[4737]: I0126 20:01:55.381808 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerStarted","Data":"560edae05ed620ffb63383ee78bda0fd0ab2360e5dbf3851d7942df7e74886cb"} Jan 26 20:01:55 crc kubenswrapper[4737]: I0126 20:01:55.438186 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xqq59" podStartSLOduration=2.80866008 podStartE2EDuration="5.438158849s" podCreationTimestamp="2026-01-26 20:01:50 +0000 UTC" firstStartedPulling="2026-01-26 20:01:52.322411634 +0000 UTC m=+5485.630606342" lastFinishedPulling="2026-01-26 20:01:54.951910403 +0000 UTC m=+5488.260105111" observedRunningTime="2026-01-26 20:01:55.407575709 +0000 UTC m=+5488.715770427" watchObservedRunningTime="2026-01-26 20:01:55.438158849 +0000 UTC m=+5488.746353557" Jan 26 20:02:00 crc kubenswrapper[4737]: I0126 20:02:00.805661 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:00 crc kubenswrapper[4737]: I0126 20:02:00.806171 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:00 crc kubenswrapper[4737]: I0126 20:02:00.869814 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:01 crc kubenswrapper[4737]: I0126 20:02:01.516407 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:01 crc kubenswrapper[4737]: I0126 20:02:01.573894 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:02:03 crc kubenswrapper[4737]: I0126 20:02:03.462779 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xqq59" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="registry-server" containerID="cri-o://560edae05ed620ffb63383ee78bda0fd0ab2360e5dbf3851d7942df7e74886cb" gracePeriod=2 Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.474642 4737 generic.go:334] "Generic (PLEG): container finished" podID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerID="560edae05ed620ffb63383ee78bda0fd0ab2360e5dbf3851d7942df7e74886cb" exitCode=0 Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.474903 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerDied","Data":"560edae05ed620ffb63383ee78bda0fd0ab2360e5dbf3851d7942df7e74886cb"} Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.782954 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.926751 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content\") pod \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.926933 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2vf\" (UniqueName: \"kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf\") pod \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.927274 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities\") pod \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\" (UID: \"3e665764-fb3d-4017-8b31-dcbf10a1f2ef\") " Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.929139 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities" (OuterVolumeSpecName: "utilities") pod "3e665764-fb3d-4017-8b31-dcbf10a1f2ef" (UID: "3e665764-fb3d-4017-8b31-dcbf10a1f2ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.935472 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf" (OuterVolumeSpecName: "kube-api-access-jr2vf") pod "3e665764-fb3d-4017-8b31-dcbf10a1f2ef" (UID: "3e665764-fb3d-4017-8b31-dcbf10a1f2ef"). InnerVolumeSpecName "kube-api-access-jr2vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.984402 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e665764-fb3d-4017-8b31-dcbf10a1f2ef" (UID: "3e665764-fb3d-4017-8b31-dcbf10a1f2ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:02:04 crc kubenswrapper[4737]: I0126 20:02:04.984533 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:02:04 crc kubenswrapper[4737]: E0126 20:02:04.985619 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.032618 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.033048 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2vf\" (UniqueName: \"kubernetes.io/projected/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-kube-api-access-jr2vf\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.033065 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e665764-fb3d-4017-8b31-dcbf10a1f2ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:02:05 
crc kubenswrapper[4737]: I0126 20:02:05.489132 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqq59" event={"ID":"3e665764-fb3d-4017-8b31-dcbf10a1f2ef","Type":"ContainerDied","Data":"e51bdce1a7d6dcbf28c7ece7be6acd9ec0b74e7eaa7fd65c77cb92dd312750a0"} Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.489186 4737 scope.go:117] "RemoveContainer" containerID="560edae05ed620ffb63383ee78bda0fd0ab2360e5dbf3851d7942df7e74886cb" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.489220 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqq59" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.517763 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.519389 4737 scope.go:117] "RemoveContainer" containerID="824547fdba56b5c407cc7ff0dcf753d7b4d2dd50b8b5835a9a743a3e316028b3" Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.534728 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xqq59"] Jan 26 20:02:05 crc kubenswrapper[4737]: I0126 20:02:05.541597 4737 scope.go:117] "RemoveContainer" containerID="1e2e72b33bd49b9d089cc1d1f69e040446dd9af795b0dcc2fc936a4e910a319a" Jan 26 20:02:07 crc kubenswrapper[4737]: I0126 20:02:07.001199 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" path="/var/lib/kubelet/pods/3e665764-fb3d-4017-8b31-dcbf10a1f2ef/volumes" Jan 26 20:02:15 crc kubenswrapper[4737]: I0126 20:02:15.982586 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:02:15 crc kubenswrapper[4737]: E0126 20:02:15.984299 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:02:30 crc kubenswrapper[4737]: I0126 20:02:30.982281 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:02:30 crc kubenswrapper[4737]: E0126 20:02:30.983296 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:02:43 crc kubenswrapper[4737]: I0126 20:02:43.982014 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:02:43 crc kubenswrapper[4737]: E0126 20:02:43.982858 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:02:56 crc kubenswrapper[4737]: I0126 20:02:56.995567 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:02:56 crc kubenswrapper[4737]: E0126 20:02:56.996909 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:03:10 crc kubenswrapper[4737]: I0126 20:03:10.982251 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:03:10 crc kubenswrapper[4737]: E0126 20:03:10.982923 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:03:23 crc kubenswrapper[4737]: I0126 20:03:23.983399 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:03:23 crc kubenswrapper[4737]: E0126 20:03:23.984734 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:03:36 crc kubenswrapper[4737]: I0126 20:03:36.993450 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:03:37 crc kubenswrapper[4737]: I0126 20:03:37.653574 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9"} Jan 26 20:06:00 crc kubenswrapper[4737]: I0126 20:06:00.949635 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:06:00 crc kubenswrapper[4737]: I0126 20:06:00.950078 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:06:30 crc kubenswrapper[4737]: I0126 20:06:30.949059 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:06:30 crc kubenswrapper[4737]: I0126 20:06:30.949783 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:07:00 crc kubenswrapper[4737]: I0126 20:07:00.949333 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:07:00 crc kubenswrapper[4737]: I0126 20:07:00.949897 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:07:00 crc kubenswrapper[4737]: I0126 20:07:00.949938 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:07:00 crc kubenswrapper[4737]: I0126 20:07:00.950666 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:07:00 crc kubenswrapper[4737]: I0126 20:07:00.950717 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9" gracePeriod=600 Jan 26 20:07:01 crc kubenswrapper[4737]: I0126 20:07:01.972708 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9" exitCode=0 Jan 26 20:07:01 crc kubenswrapper[4737]: I0126 20:07:01.972842 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9"} Jan 26 20:07:01 crc kubenswrapper[4737]: I0126 20:07:01.973642 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961"} Jan 26 20:07:01 crc kubenswrapper[4737]: I0126 20:07:01.973698 4737 scope.go:117] "RemoveContainer" containerID="999f3b8bf4218ca969dd3559e41014ea98b9927ff16b813685ab6cfa003cd090" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.906759 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912380 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="extract-content" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912439 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="extract-content" Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912467 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="extract-content" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912477 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="extract-content" Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912498 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="extract-utilities" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912524 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="96022805-a04d-4a11-9bfb-312081638b93" 
containerName="extract-utilities" Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912575 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="extract-utilities" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912585 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="extract-utilities" Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912608 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912616 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: E0126 20:09:29.912671 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.912681 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.913212 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e665764-fb3d-4017-8b31-dcbf10a1f2ef" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.913262 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="96022805-a04d-4a11-9bfb-312081638b93" containerName="registry-server" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.916392 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.950676 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.994753 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.994834 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpnk\" (UniqueName: \"kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:29 crc kubenswrapper[4737]: I0126 20:09:29.995051 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.098410 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.098827 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5kpnk\" (UniqueName: \"kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.100710 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.100780 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.101157 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.124816 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kpnk\" (UniqueName: \"kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk\") pod \"redhat-operators-c459z\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.263908 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.903571 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.949139 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:09:30 crc kubenswrapper[4737]: I0126 20:09:30.949240 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:09:31 crc kubenswrapper[4737]: I0126 20:09:31.699549 4737 generic.go:334] "Generic (PLEG): container finished" podID="dda89d3b-1579-4684-a833-3695bcf133b3" containerID="b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c" exitCode=0 Jan 26 20:09:31 crc kubenswrapper[4737]: I0126 20:09:31.699837 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerDied","Data":"b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c"} Jan 26 20:09:31 crc kubenswrapper[4737]: I0126 20:09:31.699866 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerStarted","Data":"3b3403d3fad962d19d7a2de1cda5c5237deb59c62b9c636cf59aa70ef6bf7b81"} Jan 26 20:09:31 crc kubenswrapper[4737]: I0126 20:09:31.705551 4737 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:09:32 crc kubenswrapper[4737]: I0126 20:09:32.712769 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerStarted","Data":"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5"} Jan 26 20:09:35 crc kubenswrapper[4737]: I0126 20:09:35.763374 4737 generic.go:334] "Generic (PLEG): container finished" podID="dda89d3b-1579-4684-a833-3695bcf133b3" containerID="6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5" exitCode=0 Jan 26 20:09:35 crc kubenswrapper[4737]: I0126 20:09:35.764237 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerDied","Data":"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5"} Jan 26 20:09:36 crc kubenswrapper[4737]: I0126 20:09:36.779196 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerStarted","Data":"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e"} Jan 26 20:09:36 crc kubenswrapper[4737]: I0126 20:09:36.829399 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c459z" podStartSLOduration=3.321472955 podStartE2EDuration="7.829362471s" podCreationTimestamp="2026-01-26 20:09:29 +0000 UTC" firstStartedPulling="2026-01-26 20:09:31.704171809 +0000 UTC m=+5945.012366517" lastFinishedPulling="2026-01-26 20:09:36.212061315 +0000 UTC m=+5949.520256033" observedRunningTime="2026-01-26 20:09:36.80563091 +0000 UTC m=+5950.113825618" watchObservedRunningTime="2026-01-26 20:09:36.829362471 +0000 UTC m=+5950.137557199" Jan 26 20:09:40 crc kubenswrapper[4737]: I0126 20:09:40.264389 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:40 crc kubenswrapper[4737]: I0126 20:09:40.264825 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:41 crc kubenswrapper[4737]: I0126 20:09:41.331561 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c459z" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="registry-server" probeResult="failure" output=< Jan 26 20:09:41 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 20:09:41 crc kubenswrapper[4737]: > Jan 26 20:09:50 crc kubenswrapper[4737]: I0126 20:09:50.339957 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:50 crc kubenswrapper[4737]: I0126 20:09:50.393481 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:50 crc kubenswrapper[4737]: I0126 20:09:50.585622 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.002162 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c459z" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="registry-server" containerID="cri-o://501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e" gracePeriod=2 Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.561888 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.653602 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content\") pod \"dda89d3b-1579-4684-a833-3695bcf133b3\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.653737 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities\") pod \"dda89d3b-1579-4684-a833-3695bcf133b3\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.653919 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kpnk\" (UniqueName: \"kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk\") pod \"dda89d3b-1579-4684-a833-3695bcf133b3\" (UID: \"dda89d3b-1579-4684-a833-3695bcf133b3\") " Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.655729 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities" (OuterVolumeSpecName: "utilities") pod "dda89d3b-1579-4684-a833-3695bcf133b3" (UID: "dda89d3b-1579-4684-a833-3695bcf133b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.663658 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk" (OuterVolumeSpecName: "kube-api-access-5kpnk") pod "dda89d3b-1579-4684-a833-3695bcf133b3" (UID: "dda89d3b-1579-4684-a833-3695bcf133b3"). InnerVolumeSpecName "kube-api-access-5kpnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.758773 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.758849 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kpnk\" (UniqueName: \"kubernetes.io/projected/dda89d3b-1579-4684-a833-3695bcf133b3-kube-api-access-5kpnk\") on node \"crc\" DevicePath \"\"" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.767245 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda89d3b-1579-4684-a833-3695bcf133b3" (UID: "dda89d3b-1579-4684-a833-3695bcf133b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:09:52 crc kubenswrapper[4737]: I0126 20:09:52.860842 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda89d3b-1579-4684-a833-3695bcf133b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.013343 4737 generic.go:334] "Generic (PLEG): container finished" podID="dda89d3b-1579-4684-a833-3695bcf133b3" containerID="501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e" exitCode=0 Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.013401 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerDied","Data":"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e"} Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.014221 4737 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-c459z" event={"ID":"dda89d3b-1579-4684-a833-3695bcf133b3","Type":"ContainerDied","Data":"3b3403d3fad962d19d7a2de1cda5c5237deb59c62b9c636cf59aa70ef6bf7b81"} Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.014242 4737 scope.go:117] "RemoveContainer" containerID="501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.013427 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c459z" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.050470 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.059251 4737 scope.go:117] "RemoveContainer" containerID="6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.061634 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c459z"] Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.101658 4737 scope.go:117] "RemoveContainer" containerID="b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.158996 4737 scope.go:117] "RemoveContainer" containerID="501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e" Jan 26 20:09:53 crc kubenswrapper[4737]: E0126 20:09:53.160250 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e\": container with ID starting with 501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e not found: ID does not exist" containerID="501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.160305 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e"} err="failed to get container status \"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e\": rpc error: code = NotFound desc = could not find container \"501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e\": container with ID starting with 501c5fff263e2ab9cb92d93df099fb176ea60a3a6e15b316913806a865c8262e not found: ID does not exist" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.160335 4737 scope.go:117] "RemoveContainer" containerID="6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5" Jan 26 20:09:53 crc kubenswrapper[4737]: E0126 20:09:53.160802 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5\": container with ID starting with 6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5 not found: ID does not exist" containerID="6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.160845 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5"} err="failed to get container status \"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5\": rpc error: code = NotFound desc = could not find container \"6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5\": container with ID starting with 6a54b90836e0b2b975aa04b595d99db8ae37555d10b2ddbad21d44d5e940e1b5 not found: ID does not exist" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.160873 4737 scope.go:117] "RemoveContainer" containerID="b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c" Jan 26 20:09:53 crc kubenswrapper[4737]: E0126 
20:09:53.161256 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c\": container with ID starting with b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c not found: ID does not exist" containerID="b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c" Jan 26 20:09:53 crc kubenswrapper[4737]: I0126 20:09:53.161289 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c"} err="failed to get container status \"b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c\": rpc error: code = NotFound desc = could not find container \"b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c\": container with ID starting with b29cb2e9dabd87c8c4ffa0e53e819ddead643d779c00ee0638f98b7ac9398a6c not found: ID does not exist" Jan 26 20:09:54 crc kubenswrapper[4737]: I0126 20:09:54.995312 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" path="/var/lib/kubelet/pods/dda89d3b-1579-4684-a833-3695bcf133b3/volumes" Jan 26 20:10:00 crc kubenswrapper[4737]: I0126 20:10:00.948857 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:10:00 crc kubenswrapper[4737]: I0126 20:10:00.950801 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 20:10:30 crc kubenswrapper[4737]: I0126 20:10:30.949041 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:10:30 crc kubenswrapper[4737]: I0126 20:10:30.950210 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:10:30 crc kubenswrapper[4737]: I0126 20:10:30.950269 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:10:30 crc kubenswrapper[4737]: I0126 20:10:30.951156 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:10:30 crc kubenswrapper[4737]: I0126 20:10:30.951210 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" gracePeriod=600 Jan 26 20:10:31 crc kubenswrapper[4737]: E0126 20:10:31.073946 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:10:31 crc kubenswrapper[4737]: I0126 20:10:31.457835 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" exitCode=0 Jan 26 20:10:31 crc kubenswrapper[4737]: I0126 20:10:31.457942 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961"} Jan 26 20:10:31 crc kubenswrapper[4737]: I0126 20:10:31.458023 4737 scope.go:117] "RemoveContainer" containerID="b58f7595d98a509245abd4865d2df0bd359d835ebdf87f5752d01fbf553576f9" Jan 26 20:10:31 crc kubenswrapper[4737]: I0126 20:10:31.459972 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:10:31 crc kubenswrapper[4737]: E0126 20:10:31.460994 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:10:44 crc kubenswrapper[4737]: I0126 20:10:44.983309 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:10:44 crc kubenswrapper[4737]: E0126 20:10:44.984035 4737 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:10:57 crc kubenswrapper[4737]: I0126 20:10:57.982104 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:10:57 crc kubenswrapper[4737]: E0126 20:10:57.982805 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:11:11 crc kubenswrapper[4737]: I0126 20:11:11.982134 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:11:11 crc kubenswrapper[4737]: E0126 20:11:11.983165 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:11:22 crc kubenswrapper[4737]: I0126 20:11:22.982517 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:11:22 crc kubenswrapper[4737]: E0126 20:11:22.983716 4737 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:11:33 crc kubenswrapper[4737]: I0126 20:11:33.982284 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:11:33 crc kubenswrapper[4737]: E0126 20:11:33.983196 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:11:47 crc kubenswrapper[4737]: I0126 20:11:47.981878 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:11:47 crc kubenswrapper[4737]: E0126 20:11:47.982871 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:12:01 crc kubenswrapper[4737]: I0126 20:12:01.982107 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:12:01 crc kubenswrapper[4737]: E0126 
20:12:01.982855 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:12:15 crc kubenswrapper[4737]: I0126 20:12:15.982704 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:12:15 crc kubenswrapper[4737]: E0126 20:12:15.983567 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.742576 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:19 crc kubenswrapper[4737]: E0126 20:12:19.744164 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="extract-content" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.744185 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="extract-content" Jan 26 20:12:19 crc kubenswrapper[4737]: E0126 20:12:19.744240 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="extract-utilities" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.744249 4737 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="extract-utilities" Jan 26 20:12:19 crc kubenswrapper[4737]: E0126 20:12:19.744268 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="registry-server" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.744276 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="registry-server" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.744573 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda89d3b-1579-4684-a833-3695bcf133b3" containerName="registry-server" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.746988 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.779473 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.843013 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.843225 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.843324 4737 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2drcd\" (UniqueName: \"kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.945369 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2drcd\" (UniqueName: \"kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.945592 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.945695 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.946410 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.946457 4737 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:19 crc kubenswrapper[4737]: I0126 20:12:19.996877 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2drcd\" (UniqueName: \"kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd\") pod \"community-operators-wds2k\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:20 crc kubenswrapper[4737]: I0126 20:12:20.077938 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:20 crc kubenswrapper[4737]: I0126 20:12:20.624700 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:20 crc kubenswrapper[4737]: I0126 20:12:20.813984 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerStarted","Data":"9241f679f1d1e91316cf79131af746dd7914479751f96066c98174513a5f20ea"} Jan 26 20:12:21 crc kubenswrapper[4737]: I0126 20:12:21.828259 4737 generic.go:334] "Generic (PLEG): container finished" podID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerID="a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f" exitCode=0 Jan 26 20:12:21 crc kubenswrapper[4737]: I0126 20:12:21.828349 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerDied","Data":"a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f"} Jan 26 20:12:22 crc 
kubenswrapper[4737]: I0126 20:12:22.841177 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerStarted","Data":"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86"} Jan 26 20:12:23 crc kubenswrapper[4737]: I0126 20:12:23.853847 4737 generic.go:334] "Generic (PLEG): container finished" podID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerID="6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86" exitCode=0 Jan 26 20:12:23 crc kubenswrapper[4737]: I0126 20:12:23.853910 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerDied","Data":"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86"} Jan 26 20:12:24 crc kubenswrapper[4737]: I0126 20:12:24.871742 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerStarted","Data":"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0"} Jan 26 20:12:30 crc kubenswrapper[4737]: I0126 20:12:30.078281 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:30 crc kubenswrapper[4737]: I0126 20:12:30.078817 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:30 crc kubenswrapper[4737]: I0126 20:12:30.130181 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:30 crc kubenswrapper[4737]: I0126 20:12:30.149022 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wds2k" podStartSLOduration=8.719660716 
podStartE2EDuration="11.149005114s" podCreationTimestamp="2026-01-26 20:12:19 +0000 UTC" firstStartedPulling="2026-01-26 20:12:21.831840389 +0000 UTC m=+6115.140035097" lastFinishedPulling="2026-01-26 20:12:24.261184787 +0000 UTC m=+6117.569379495" observedRunningTime="2026-01-26 20:12:24.907595286 +0000 UTC m=+6118.215790004" watchObservedRunningTime="2026-01-26 20:12:30.149005114 +0000 UTC m=+6123.457199812" Jan 26 20:12:30 crc kubenswrapper[4737]: I0126 20:12:30.982705 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:12:30 crc kubenswrapper[4737]: E0126 20:12:30.983110 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:12:31 crc kubenswrapper[4737]: I0126 20:12:31.001320 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:31 crc kubenswrapper[4737]: I0126 20:12:31.089876 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:32 crc kubenswrapper[4737]: I0126 20:12:32.957885 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wds2k" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="registry-server" containerID="cri-o://ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0" gracePeriod=2 Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.502055 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.621638 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities\") pod \"2fe07851-8431-4bf7-ad52-dda66e8304f4\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.621736 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2drcd\" (UniqueName: \"kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd\") pod \"2fe07851-8431-4bf7-ad52-dda66e8304f4\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.621824 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content\") pod \"2fe07851-8431-4bf7-ad52-dda66e8304f4\" (UID: \"2fe07851-8431-4bf7-ad52-dda66e8304f4\") " Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.624313 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities" (OuterVolumeSpecName: "utilities") pod "2fe07851-8431-4bf7-ad52-dda66e8304f4" (UID: "2fe07851-8431-4bf7-ad52-dda66e8304f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.631858 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd" (OuterVolumeSpecName: "kube-api-access-2drcd") pod "2fe07851-8431-4bf7-ad52-dda66e8304f4" (UID: "2fe07851-8431-4bf7-ad52-dda66e8304f4"). InnerVolumeSpecName "kube-api-access-2drcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.681696 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fe07851-8431-4bf7-ad52-dda66e8304f4" (UID: "2fe07851-8431-4bf7-ad52-dda66e8304f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.726718 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.726766 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe07851-8431-4bf7-ad52-dda66e8304f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.726776 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2drcd\" (UniqueName: \"kubernetes.io/projected/2fe07851-8431-4bf7-ad52-dda66e8304f4-kube-api-access-2drcd\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.969945 4737 generic.go:334] "Generic (PLEG): container finished" podID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerID="ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0" exitCode=0 Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.969996 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerDied","Data":"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0"} Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.970017 4737 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-wds2k" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.970038 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wds2k" event={"ID":"2fe07851-8431-4bf7-ad52-dda66e8304f4","Type":"ContainerDied","Data":"9241f679f1d1e91316cf79131af746dd7914479751f96066c98174513a5f20ea"} Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.970063 4737 scope.go:117] "RemoveContainer" containerID="ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0" Jan 26 20:12:33 crc kubenswrapper[4737]: I0126 20:12:33.991262 4737 scope.go:117] "RemoveContainer" containerID="6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.019551 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.024921 4737 scope.go:117] "RemoveContainer" containerID="a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.030773 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wds2k"] Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.084439 4737 scope.go:117] "RemoveContainer" containerID="ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0" Jan 26 20:12:34 crc kubenswrapper[4737]: E0126 20:12:34.084993 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0\": container with ID starting with ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0 not found: ID does not exist" containerID="ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.085032 
4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0"} err="failed to get container status \"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0\": rpc error: code = NotFound desc = could not find container \"ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0\": container with ID starting with ecfbd07b3503b587c7d6618b6c9d983e2fb838d710dc756089d9397a25092ab0 not found: ID does not exist" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.085053 4737 scope.go:117] "RemoveContainer" containerID="6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86" Jan 26 20:12:34 crc kubenswrapper[4737]: E0126 20:12:34.085418 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86\": container with ID starting with 6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86 not found: ID does not exist" containerID="6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.085440 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86"} err="failed to get container status \"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86\": rpc error: code = NotFound desc = could not find container \"6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86\": container with ID starting with 6799d0d51ccfdd3f85211f509650b19d7032f7810bd263595121ffa6bd916c86 not found: ID does not exist" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.085454 4737 scope.go:117] "RemoveContainer" containerID="a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f" Jan 26 20:12:34 crc kubenswrapper[4737]: E0126 
20:12:34.085848 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f\": container with ID starting with a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f not found: ID does not exist" containerID="a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f" Jan 26 20:12:34 crc kubenswrapper[4737]: I0126 20:12:34.085905 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f"} err="failed to get container status \"a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f\": rpc error: code = NotFound desc = could not find container \"a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f\": container with ID starting with a0a53aa19f2750646de3ff2b3b8fab16ee0072e66d6434c9c5381ac97bbe2d0f not found: ID does not exist" Jan 26 20:12:35 crc kubenswrapper[4737]: I0126 20:12:35.014799 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" path="/var/lib/kubelet/pods/2fe07851-8431-4bf7-ad52-dda66e8304f4/volumes" Jan 26 20:12:41 crc kubenswrapper[4737]: I0126 20:12:41.982059 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:12:41 crc kubenswrapper[4737]: E0126 20:12:41.982798 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:12:53 crc kubenswrapper[4737]: I0126 20:12:53.983306 
4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:12:53 crc kubenswrapper[4737]: E0126 20:12:53.988266 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:13:06 crc kubenswrapper[4737]: I0126 20:13:06.994889 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:13:06 crc kubenswrapper[4737]: E0126 20:13:06.996019 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:13:17 crc kubenswrapper[4737]: I0126 20:13:17.983822 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:13:17 crc kubenswrapper[4737]: E0126 20:13:17.984903 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:13:24 crc kubenswrapper[4737]: I0126 
20:13:24.328956 4737 generic.go:334] "Generic (PLEG): container finished" podID="d81cdf24-ce67-401f-869f-805f4718fce4" containerID="180b2af4ea7eabf4e2acf652f681d686309ee1b6332346cbf09d3cf12422b349" exitCode=0 Jan 26 20:13:24 crc kubenswrapper[4737]: I0126 20:13:24.329188 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d81cdf24-ce67-401f-869f-805f4718fce4","Type":"ContainerDied","Data":"180b2af4ea7eabf4e2acf652f681d686309ee1b6332346cbf09d3cf12422b349"} Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.812539 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919113 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919208 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919252 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919371 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919410 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919454 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919496 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919529 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.919594 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvk65\" (UniqueName: \"kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65\") pod \"d81cdf24-ce67-401f-869f-805f4718fce4\" (UID: \"d81cdf24-ce67-401f-869f-805f4718fce4\") " Jan 26 20:13:25 crc 
kubenswrapper[4737]: I0126 20:13:25.920146 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.920296 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data" (OuterVolumeSpecName: "config-data") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.920845 4737 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.920868 4737 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.926958 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.929041 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65" (OuterVolumeSpecName: "kube-api-access-bvk65") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "kube-api-access-bvk65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.931492 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.953922 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.956778 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.960417 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:13:25 crc kubenswrapper[4737]: I0126 20:13:25.997041 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d81cdf24-ce67-401f-869f-805f4718fce4" (UID: "d81cdf24-ce67-401f-869f-805f4718fce4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023696 4737 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d81cdf24-ce67-401f-869f-805f4718fce4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023753 4737 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023768 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023783 4737 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-openstack-config-secret\") on 
node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023795 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvk65\" (UniqueName: \"kubernetes.io/projected/d81cdf24-ce67-401f-869f-805f4718fce4-kube-api-access-bvk65\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.023830 4737 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d81cdf24-ce67-401f-869f-805f4718fce4-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.024317 4737 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.054823 4737 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.126806 4737 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.360333 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d81cdf24-ce67-401f-869f-805f4718fce4","Type":"ContainerDied","Data":"a6ed3328d7e95852106d94e6730d632042147056c71b8fb6b8f2dfe6e6362332"} Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.360803 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ed3328d7e95852106d94e6730d632042147056c71b8fb6b8f2dfe6e6362332" Jan 26 20:13:26 crc kubenswrapper[4737]: I0126 20:13:26.360475 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 20:13:29 crc kubenswrapper[4737]: I0126 20:13:29.982747 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:13:29 crc kubenswrapper[4737]: E0126 20:13:29.983834 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.519965 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:13:31 crc kubenswrapper[4737]: E0126 20:13:31.522267 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="extract-utilities" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.522291 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="extract-utilities" Jan 26 20:13:31 crc kubenswrapper[4737]: E0126 20:13:31.522311 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="registry-server" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.522318 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="registry-server" Jan 26 20:13:31 crc kubenswrapper[4737]: E0126 20:13:31.522336 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="extract-content" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.522342 4737 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="extract-content" Jan 26 20:13:31 crc kubenswrapper[4737]: E0126 20:13:31.523427 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81cdf24-ce67-401f-869f-805f4718fce4" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.523445 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81cdf24-ce67-401f-869f-805f4718fce4" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.523735 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81cdf24-ce67-401f-869f-805f4718fce4" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.523775 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe07851-8431-4bf7-ad52-dda66e8304f4" containerName="registry-server" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.528269 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.538851 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.547183 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tk496" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.731286 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.731587 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzqx\" (UniqueName: \"kubernetes.io/projected/7e035125-8d0b-4019-9266-fd7abb0057da-kube-api-access-9gzqx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.834593 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.834741 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzqx\" (UniqueName: 
\"kubernetes.io/projected/7e035125-8d0b-4019-9266-fd7abb0057da-kube-api-access-9gzqx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.835668 4737 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.856394 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzqx\" (UniqueName: \"kubernetes.io/projected/7e035125-8d0b-4019-9266-fd7abb0057da-kube-api-access-9gzqx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:31 crc kubenswrapper[4737]: I0126 20:13:31.868960 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7e035125-8d0b-4019-9266-fd7abb0057da\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:32 crc kubenswrapper[4737]: I0126 20:13:32.153998 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:13:32 crc kubenswrapper[4737]: I0126 20:13:32.647730 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:13:33 crc kubenswrapper[4737]: I0126 20:13:33.441455 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7e035125-8d0b-4019-9266-fd7abb0057da","Type":"ContainerStarted","Data":"0f705593a2704d4a5f775fc57cac12eee0e7cab10a852fbabfe5a8ff0c04d43e"} Jan 26 20:13:34 crc kubenswrapper[4737]: I0126 20:13:34.453682 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7e035125-8d0b-4019-9266-fd7abb0057da","Type":"ContainerStarted","Data":"5860f332aa5b9855b8c7b144a308bb9ed2cdd83e70e533b059781decafde17ff"} Jan 26 20:13:34 crc kubenswrapper[4737]: I0126 20:13:34.471527 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.635581944 podStartE2EDuration="3.471511634s" podCreationTimestamp="2026-01-26 20:13:31 +0000 UTC" firstStartedPulling="2026-01-26 20:13:32.675281389 +0000 UTC m=+6185.983476127" lastFinishedPulling="2026-01-26 20:13:33.511211099 +0000 UTC m=+6186.819405817" observedRunningTime="2026-01-26 20:13:34.467082396 +0000 UTC m=+6187.775277104" watchObservedRunningTime="2026-01-26 20:13:34.471511634 +0000 UTC m=+6187.779706342" Jan 26 20:13:44 crc kubenswrapper[4737]: I0126 20:13:44.982627 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:13:44 crc kubenswrapper[4737]: E0126 20:13:44.983747 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:13:59 crc kubenswrapper[4737]: I0126 20:13:59.981825 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:13:59 crc kubenswrapper[4737]: E0126 20:13:59.982737 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.398936 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfgnj/must-gather-l29l6"] Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.401378 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.407619 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hfgnj"/"default-dockercfg-5n5qc" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.411283 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfgnj"/"openshift-service-ca.crt" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.411646 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfgnj"/"kube-root-ca.crt" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.484479 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfgnj/must-gather-l29l6"] Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.513958 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.514011 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrj6\" (UniqueName: \"kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.621814 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " 
pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.621870 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrj6\" (UniqueName: \"kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.622641 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.646091 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrj6\" (UniqueName: \"kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6\") pod \"must-gather-l29l6\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:03 crc kubenswrapper[4737]: I0126 20:14:03.744849 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:14:04 crc kubenswrapper[4737]: I0126 20:14:04.340207 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfgnj/must-gather-l29l6"] Jan 26 20:14:04 crc kubenswrapper[4737]: I0126 20:14:04.837594 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/must-gather-l29l6" event={"ID":"2b1ad284-3db2-439a-ab78-6265ba868f9d","Type":"ContainerStarted","Data":"f9bc0df117a98481ddea92fbf98ed7107caf54a43916d44d4fcfeb87ebdb2112"} Jan 26 20:14:10 crc kubenswrapper[4737]: I0126 20:14:10.986172 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:14:10 crc kubenswrapper[4737]: E0126 20:14:10.987834 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:14:12 crc kubenswrapper[4737]: I0126 20:14:12.968736 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/must-gather-l29l6" event={"ID":"2b1ad284-3db2-439a-ab78-6265ba868f9d","Type":"ContainerStarted","Data":"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc"} Jan 26 20:14:13 crc kubenswrapper[4737]: I0126 20:14:13.985526 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/must-gather-l29l6" event={"ID":"2b1ad284-3db2-439a-ab78-6265ba868f9d","Type":"ContainerStarted","Data":"2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6"} Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.010931 4737 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-hfgnj/must-gather-l29l6" podStartSLOduration=6.738211844 podStartE2EDuration="15.010911001s" podCreationTimestamp="2026-01-26 20:14:03 +0000 UTC" firstStartedPulling="2026-01-26 20:14:04.323681726 +0000 UTC m=+6217.631876434" lastFinishedPulling="2026-01-26 20:14:12.596380883 +0000 UTC m=+6225.904575591" observedRunningTime="2026-01-26 20:14:14.008947184 +0000 UTC m=+6227.317141902" watchObservedRunningTime="2026-01-26 20:14:18.010911001 +0000 UTC m=+6231.319105709" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.012884 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-cg9ll"] Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.014999 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.124694 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2bw7\" (UniqueName: \"kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7\") pod \"crc-debug-cg9ll\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.124803 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host\") pod \"crc-debug-cg9ll\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.228046 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2bw7\" (UniqueName: \"kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7\") pod \"crc-debug-cg9ll\" (UID: 
\"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.228172 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host\") pod \"crc-debug-cg9ll\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.228729 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host\") pod \"crc-debug-cg9ll\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.251950 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2bw7\" (UniqueName: \"kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7\") pod \"crc-debug-cg9ll\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:18 crc kubenswrapper[4737]: I0126 20:14:18.388896 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:14:19 crc kubenswrapper[4737]: I0126 20:14:19.039316 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" event={"ID":"024b80b9-0ac1-41d8-861d-2348cb945411","Type":"ContainerStarted","Data":"5424f20980896e90eed4b7d4830a325fc07c3e44f6ffac68c332049fd4f4fbe7"} Jan 26 20:14:22 crc kubenswrapper[4737]: I0126 20:14:22.982743 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:14:22 crc kubenswrapper[4737]: E0126 20:14:22.983735 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:14:31 crc kubenswrapper[4737]: I0126 20:14:31.194054 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" event={"ID":"024b80b9-0ac1-41d8-861d-2348cb945411","Type":"ContainerStarted","Data":"40fb2c371f9503b44053751860bbb053cb79a1fc291c57bf4325fa03712f4cf2"} Jan 26 20:14:31 crc kubenswrapper[4737]: I0126 20:14:31.210008 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" podStartSLOduration=2.382713012 podStartE2EDuration="14.209988722s" podCreationTimestamp="2026-01-26 20:14:17 +0000 UTC" firstStartedPulling="2026-01-26 20:14:18.549755066 +0000 UTC m=+6231.857949764" lastFinishedPulling="2026-01-26 20:14:30.377030766 +0000 UTC m=+6243.685225474" observedRunningTime="2026-01-26 20:14:31.209960601 +0000 UTC m=+6244.518155309" watchObservedRunningTime="2026-01-26 20:14:31.209988722 +0000 UTC 
m=+6244.518183430" Jan 26 20:14:36 crc kubenswrapper[4737]: I0126 20:14:35.981873 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:14:36 crc kubenswrapper[4737]: E0126 20:14:35.982618 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:14:47 crc kubenswrapper[4737]: I0126 20:14:47.141297 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:14:47 crc kubenswrapper[4737]: E0126 20:14:47.142503 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:14:59 crc kubenswrapper[4737]: I0126 20:14:59.981604 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:14:59 crc kubenswrapper[4737]: E0126 20:14:59.982595 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.229913 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd"] Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.232431 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.235044 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.235292 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.251241 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd"] Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.383749 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.384549 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.384584 4737 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bt9\" (UniqueName: \"kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.487606 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.487653 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bt9\" (UniqueName: \"kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.487773 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.489104 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.498944 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.518703 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bt9\" (UniqueName: \"kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9\") pod \"collect-profiles-29490975-sb6nd\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:00 crc kubenswrapper[4737]: I0126 20:15:00.560036 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:01 crc kubenswrapper[4737]: I0126 20:15:01.225302 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd"] Jan 26 20:15:01 crc kubenswrapper[4737]: I0126 20:15:01.534288 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" event={"ID":"37202b5f-bc5b-4250-bb9a-e782616147e1","Type":"ContainerStarted","Data":"ca43545b9074a9823b5b4f76ec3ff9c2cc4d71d0848ca79213d28b7af3ba4d69"} Jan 26 20:15:01 crc kubenswrapper[4737]: I0126 20:15:01.534750 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" event={"ID":"37202b5f-bc5b-4250-bb9a-e782616147e1","Type":"ContainerStarted","Data":"b86ac50df416d593adf5cf7e9d63b700cd6feede118115052b1bcbd5a65f385b"} Jan 26 20:15:01 crc kubenswrapper[4737]: I0126 20:15:01.558969 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" podStartSLOduration=1.558945058 podStartE2EDuration="1.558945058s" podCreationTimestamp="2026-01-26 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:15:01.549378314 +0000 UTC m=+6274.857573022" watchObservedRunningTime="2026-01-26 20:15:01.558945058 +0000 UTC m=+6274.867139766" Jan 26 20:15:02 crc kubenswrapper[4737]: I0126 20:15:02.550604 4737 generic.go:334] "Generic (PLEG): container finished" podID="37202b5f-bc5b-4250-bb9a-e782616147e1" containerID="ca43545b9074a9823b5b4f76ec3ff9c2cc4d71d0848ca79213d28b7af3ba4d69" exitCode=0 Jan 26 20:15:02 crc kubenswrapper[4737]: I0126 20:15:02.550673 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" event={"ID":"37202b5f-bc5b-4250-bb9a-e782616147e1","Type":"ContainerDied","Data":"ca43545b9074a9823b5b4f76ec3ff9c2cc4d71d0848ca79213d28b7af3ba4d69"} Jan 26 20:15:03 crc kubenswrapper[4737]: I0126 20:15:03.986629 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.077528 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume\") pod \"37202b5f-bc5b-4250-bb9a-e782616147e1\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.077718 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume\") pod \"37202b5f-bc5b-4250-bb9a-e782616147e1\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.077754 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bt9\" (UniqueName: \"kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9\") pod \"37202b5f-bc5b-4250-bb9a-e782616147e1\" (UID: \"37202b5f-bc5b-4250-bb9a-e782616147e1\") " Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.079189 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "37202b5f-bc5b-4250-bb9a-e782616147e1" (UID: "37202b5f-bc5b-4250-bb9a-e782616147e1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.080282 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37202b5f-bc5b-4250-bb9a-e782616147e1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.084794 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9" (OuterVolumeSpecName: "kube-api-access-v4bt9") pod "37202b5f-bc5b-4250-bb9a-e782616147e1" (UID: "37202b5f-bc5b-4250-bb9a-e782616147e1"). InnerVolumeSpecName "kube-api-access-v4bt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.091393 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37202b5f-bc5b-4250-bb9a-e782616147e1" (UID: "37202b5f-bc5b-4250-bb9a-e782616147e1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.182295 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37202b5f-bc5b-4250-bb9a-e782616147e1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.182326 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bt9\" (UniqueName: \"kubernetes.io/projected/37202b5f-bc5b-4250-bb9a-e782616147e1-kube-api-access-v4bt9\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.288025 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs"] Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.306777 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-4k5qs"] Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.574152 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" event={"ID":"37202b5f-bc5b-4250-bb9a-e782616147e1","Type":"ContainerDied","Data":"b86ac50df416d593adf5cf7e9d63b700cd6feede118115052b1bcbd5a65f385b"} Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.574464 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86ac50df416d593adf5cf7e9d63b700cd6feede118115052b1bcbd5a65f385b" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.574220 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-sb6nd" Jan 26 20:15:04 crc kubenswrapper[4737]: I0126 20:15:04.997930 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d5c317-4d69-4c80-8d0e-98dcfb41af6c" path="/var/lib/kubelet/pods/04d5c317-4d69-4c80-8d0e-98dcfb41af6c/volumes" Jan 26 20:15:11 crc kubenswrapper[4737]: I0126 20:15:11.982624 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:15:11 crc kubenswrapper[4737]: E0126 20:15:11.983905 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:15:25 crc kubenswrapper[4737]: I0126 20:15:25.877639 4737 generic.go:334] "Generic (PLEG): container finished" podID="024b80b9-0ac1-41d8-861d-2348cb945411" containerID="40fb2c371f9503b44053751860bbb053cb79a1fc291c57bf4325fa03712f4cf2" exitCode=0 Jan 26 20:15:25 crc kubenswrapper[4737]: I0126 20:15:25.877696 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" event={"ID":"024b80b9-0ac1-41d8-861d-2348cb945411","Type":"ContainerDied","Data":"40fb2c371f9503b44053751860bbb053cb79a1fc291c57bf4325fa03712f4cf2"} Jan 26 20:15:26 crc kubenswrapper[4737]: I0126 20:15:26.990366 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:15:26 crc kubenswrapper[4737]: E0126 20:15:26.991200 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.033687 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.075010 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-cg9ll"] Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.089330 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-cg9ll"] Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.118897 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host\") pod \"024b80b9-0ac1-41d8-861d-2348cb945411\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.119053 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host" (OuterVolumeSpecName: "host") pod "024b80b9-0ac1-41d8-861d-2348cb945411" (UID: "024b80b9-0ac1-41d8-861d-2348cb945411"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.119138 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2bw7\" (UniqueName: \"kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7\") pod \"024b80b9-0ac1-41d8-861d-2348cb945411\" (UID: \"024b80b9-0ac1-41d8-861d-2348cb945411\") " Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.120325 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/024b80b9-0ac1-41d8-861d-2348cb945411-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.125111 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7" (OuterVolumeSpecName: "kube-api-access-q2bw7") pod "024b80b9-0ac1-41d8-861d-2348cb945411" (UID: "024b80b9-0ac1-41d8-861d-2348cb945411"). InnerVolumeSpecName "kube-api-access-q2bw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.223202 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2bw7\" (UniqueName: \"kubernetes.io/projected/024b80b9-0ac1-41d8-861d-2348cb945411-kube-api-access-q2bw7\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.903722 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5424f20980896e90eed4b7d4830a325fc07c3e44f6ffac68c332049fd4f4fbe7" Jan 26 20:15:27 crc kubenswrapper[4737]: I0126 20:15:27.903791 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-cg9ll" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.358610 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-bshnd"] Jan 26 20:15:28 crc kubenswrapper[4737]: E0126 20:15:28.359309 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37202b5f-bc5b-4250-bb9a-e782616147e1" containerName="collect-profiles" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.359326 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="37202b5f-bc5b-4250-bb9a-e782616147e1" containerName="collect-profiles" Jan 26 20:15:28 crc kubenswrapper[4737]: E0126 20:15:28.359365 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024b80b9-0ac1-41d8-861d-2348cb945411" containerName="container-00" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.359372 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="024b80b9-0ac1-41d8-861d-2348cb945411" containerName="container-00" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.359676 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="024b80b9-0ac1-41d8-861d-2348cb945411" containerName="container-00" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.359688 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="37202b5f-bc5b-4250-bb9a-e782616147e1" containerName="collect-profiles" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.361154 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.457700 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.458226 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2864x\" (UniqueName: \"kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.560374 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2864x\" (UniqueName: \"kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.560701 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.560858 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc 
kubenswrapper[4737]: I0126 20:15:28.583863 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2864x\" (UniqueName: \"kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x\") pod \"crc-debug-bshnd\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.685284 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.915079 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" event={"ID":"c4aed0f9-7f6b-4a9f-8715-6f4941b87322","Type":"ContainerStarted","Data":"9e980a829b4094b42f0e1cc6962efeaee1e780fabcdb84e6a575339d1e952a3e"} Jan 26 20:15:28 crc kubenswrapper[4737]: I0126 20:15:28.997415 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024b80b9-0ac1-41d8-861d-2348cb945411" path="/var/lib/kubelet/pods/024b80b9-0ac1-41d8-861d-2348cb945411/volumes" Jan 26 20:15:29 crc kubenswrapper[4737]: I0126 20:15:29.930495 4737 generic.go:334] "Generic (PLEG): container finished" podID="c4aed0f9-7f6b-4a9f-8715-6f4941b87322" containerID="e095bcae9c74ec47800ec7d6799d45eea2a30e5ccf3ed83dce8e8e7646f97f6b" exitCode=0 Jan 26 20:15:29 crc kubenswrapper[4737]: I0126 20:15:29.930570 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" event={"ID":"c4aed0f9-7f6b-4a9f-8715-6f4941b87322","Type":"ContainerDied","Data":"e095bcae9c74ec47800ec7d6799d45eea2a30e5ccf3ed83dce8e8e7646f97f6b"} Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.080451 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.226542 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host\") pod \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.226674 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host" (OuterVolumeSpecName: "host") pod "c4aed0f9-7f6b-4a9f-8715-6f4941b87322" (UID: "c4aed0f9-7f6b-4a9f-8715-6f4941b87322"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.226775 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2864x\" (UniqueName: \"kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x\") pod \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\" (UID: \"c4aed0f9-7f6b-4a9f-8715-6f4941b87322\") " Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.227416 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.232731 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x" (OuterVolumeSpecName: "kube-api-access-2864x") pod "c4aed0f9-7f6b-4a9f-8715-6f4941b87322" (UID: "c4aed0f9-7f6b-4a9f-8715-6f4941b87322"). InnerVolumeSpecName "kube-api-access-2864x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.329368 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2864x\" (UniqueName: \"kubernetes.io/projected/c4aed0f9-7f6b-4a9f-8715-6f4941b87322-kube-api-access-2864x\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.961080 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" event={"ID":"c4aed0f9-7f6b-4a9f-8715-6f4941b87322","Type":"ContainerDied","Data":"9e980a829b4094b42f0e1cc6962efeaee1e780fabcdb84e6a575339d1e952a3e"} Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.961508 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e980a829b4094b42f0e1cc6962efeaee1e780fabcdb84e6a575339d1e952a3e" Jan 26 20:15:31 crc kubenswrapper[4737]: I0126 20:15:31.961563 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-bshnd" Jan 26 20:15:32 crc kubenswrapper[4737]: E0126 20:15:32.083220 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4aed0f9_7f6b_4a9f_8715_6f4941b87322.slice\": RecentStats: unable to find data in memory cache]" Jan 26 20:15:32 crc kubenswrapper[4737]: I0126 20:15:32.488736 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-bshnd"] Jan 26 20:15:32 crc kubenswrapper[4737]: I0126 20:15:32.501564 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-bshnd"] Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.003793 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4aed0f9-7f6b-4a9f-8715-6f4941b87322" path="/var/lib/kubelet/pods/c4aed0f9-7f6b-4a9f-8715-6f4941b87322/volumes" Jan 
26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.704699 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-6qdwl"] Jan 26 20:15:33 crc kubenswrapper[4737]: E0126 20:15:33.705283 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aed0f9-7f6b-4a9f-8715-6f4941b87322" containerName="container-00" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.705302 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aed0f9-7f6b-4a9f-8715-6f4941b87322" containerName="container-00" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.705593 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4aed0f9-7f6b-4a9f-8715-6f4941b87322" containerName="container-00" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.706463 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.805328 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2kx\" (UniqueName: \"kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.805729 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.907950 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2kx\" (UniqueName: 
\"kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.908009 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.908190 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:33 crc kubenswrapper[4737]: I0126 20:15:33.939620 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2kx\" (UniqueName: \"kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx\") pod \"crc-debug-6qdwl\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:34 crc kubenswrapper[4737]: I0126 20:15:34.039903 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:34 crc kubenswrapper[4737]: I0126 20:15:34.993562 4737 generic.go:334] "Generic (PLEG): container finished" podID="20bc3963-c291-48b0-b01c-8000605805ce" containerID="d9e2b9dbe5a053b666c303eceeba0841af73b4f107c179364db1b690dfdeea3f" exitCode=0 Jan 26 20:15:34 crc kubenswrapper[4737]: I0126 20:15:34.996774 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" event={"ID":"20bc3963-c291-48b0-b01c-8000605805ce","Type":"ContainerDied","Data":"d9e2b9dbe5a053b666c303eceeba0841af73b4f107c179364db1b690dfdeea3f"} Jan 26 20:15:34 crc kubenswrapper[4737]: I0126 20:15:34.996822 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" event={"ID":"20bc3963-c291-48b0-b01c-8000605805ce","Type":"ContainerStarted","Data":"a411ee278e9e2c242e60c569d9a1860e5c78ee77d1930ff0f295a2cb5b9949ed"} Jan 26 20:15:35 crc kubenswrapper[4737]: I0126 20:15:35.050463 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-6qdwl"] Jan 26 20:15:35 crc kubenswrapper[4737]: I0126 20:15:35.061189 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfgnj/crc-debug-6qdwl"] Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.128239 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.167994 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host\") pod \"20bc3963-c291-48b0-b01c-8000605805ce\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.168179 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host" (OuterVolumeSpecName: "host") pod "20bc3963-c291-48b0-b01c-8000605805ce" (UID: "20bc3963-c291-48b0-b01c-8000605805ce"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.168479 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv2kx\" (UniqueName: \"kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx\") pod \"20bc3963-c291-48b0-b01c-8000605805ce\" (UID: \"20bc3963-c291-48b0-b01c-8000605805ce\") " Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.169250 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/20bc3963-c291-48b0-b01c-8000605805ce-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.176283 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx" (OuterVolumeSpecName: "kube-api-access-tv2kx") pod "20bc3963-c291-48b0-b01c-8000605805ce" (UID: "20bc3963-c291-48b0-b01c-8000605805ce"). InnerVolumeSpecName "kube-api-access-tv2kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.274285 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv2kx\" (UniqueName: \"kubernetes.io/projected/20bc3963-c291-48b0-b01c-8000605805ce-kube-api-access-tv2kx\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:36 crc kubenswrapper[4737]: I0126 20:15:36.995974 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bc3963-c291-48b0-b01c-8000605805ce" path="/var/lib/kubelet/pods/20bc3963-c291-48b0-b01c-8000605805ce/volumes" Jan 26 20:15:37 crc kubenswrapper[4737]: I0126 20:15:37.016295 4737 scope.go:117] "RemoveContainer" containerID="d9e2b9dbe5a053b666c303eceeba0841af73b4f107c179364db1b690dfdeea3f" Jan 26 20:15:37 crc kubenswrapper[4737]: I0126 20:15:37.016841 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfgnj/crc-debug-6qdwl" Jan 26 20:15:38 crc kubenswrapper[4737]: I0126 20:15:38.725467 4737 scope.go:117] "RemoveContainer" containerID="8b51cf34a6a0e319157bc98ff85610b708b510bb897a8c2dc1b086a6a339dd3a" Jan 26 20:15:41 crc kubenswrapper[4737]: I0126 20:15:41.981804 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:15:43 crc kubenswrapper[4737]: I0126 20:15:43.096948 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b"} Jan 26 20:16:02 crc kubenswrapper[4737]: I0126 20:16:02.615510 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-api/0.log" Jan 26 20:16:02 crc kubenswrapper[4737]: I0126 20:16:02.872134 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-listener/0.log" Jan 26 20:16:02 crc kubenswrapper[4737]: I0126 20:16:02.908512 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-evaluator/0.log" Jan 26 20:16:02 crc kubenswrapper[4737]: I0126 20:16:02.954311 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-notifier/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.249840 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-687b47d654-rb2ft_1aef338e-174a-4bc2-acd1-56374a72e519/barbican-api/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.253831 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-687b47d654-rb2ft_1aef338e-174a-4bc2-acd1-56374a72e519/barbican-api-log/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.350743 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c5b6c8cdb-gwc7x_b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b/barbican-keystone-listener/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.609517 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c5b6c8cdb-gwc7x_b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b/barbican-keystone-listener-log/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.642909 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-866f479b9-7wv96_b84a5366-14c9-4b93-b185-18a4e3695ed7/barbican-worker/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.728588 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-866f479b9-7wv96_b84a5366-14c9-4b93-b185-18a4e3695ed7/barbican-worker-log/0.log" Jan 26 20:16:03 crc kubenswrapper[4737]: I0126 20:16:03.968902 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj_6d1d0ed3-31b7-41a2-8f49-741d206509bd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.087535 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/ceilometer-central-agent/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.184905 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/ceilometer-notification-agent/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.268927 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/sg-core/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.302912 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/proxy-httpd/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.562391 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_715806cf-cb82-4224-bdb0-8aed20e29b49/cinder-api/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.580418 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_715806cf-cb82-4224-bdb0-8aed20e29b49/cinder-api-log/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.777593 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_635e921c-e7e7-4721-a152-f589e21e4631/cinder-scheduler/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.843259 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_635e921c-e7e7-4721-a152-f589e21e4631/probe/0.log" Jan 26 20:16:04 crc kubenswrapper[4737]: I0126 20:16:04.989764 4737 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk_f606c12b-460a-4ec1-ac57-d4e5667945de/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.148301 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-trclj_e5cc8a39-bca0-4175-a418-a24c75e5bc06/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.209251 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/init/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.560860 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/init/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.602552 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/dnsmasq-dns/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.626902 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bd28j_5e950231-d00c-4fbd-b9de-a93d2d86eb36/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.859696 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5de2e392-7605-4b8c-831c-4101c098fc0e/glance-httpd/0.log" Jan 26 20:16:05 crc kubenswrapper[4737]: I0126 20:16:05.908720 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5de2e392-7605-4b8c-831c-4101c098fc0e/glance-log/0.log" Jan 26 20:16:06 crc kubenswrapper[4737]: I0126 20:16:06.058944 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_9c0fd189-4592-4f52-a100-e6fc3581ef26/glance-log/0.log" Jan 26 20:16:06 crc kubenswrapper[4737]: I0126 20:16:06.180668 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c0fd189-4592-4f52-a100-e6fc3581ef26/glance-httpd/0.log" Jan 26 20:16:06 crc kubenswrapper[4737]: I0126 20:16:06.807890 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-cfff6bbff-s577r_f4b0bd32-90db-4eae-a748-903c5d5cd931/heat-api/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.118393 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-858867c5df-ppbxf_de816c7c-1d5a-4226-b17c-b4f5a5c8d07b/heat-engine/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.134209 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mzz54_fa425b93-9221-4f0b-b0fd-7995e092f8f1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.301212 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6b78c96546-lpdfk_bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb/heat-cfnapi/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.333322 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lj8qk_8f08d498-ef07-4e31-ab34-d68972740f02/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.823336 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-86b84744f8-59mdj_682c692a-8447-4b49-b81d-98b7fa9ccec1/keystone-api/0.log" Jan 26 20:16:07 crc kubenswrapper[4737]: I0126 20:16:07.954124 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29490901-rhxwf_37efbad2-f8c2-4830-9ece-86870bf29923/keystone-cron/0.log" Jan 26 20:16:08 crc kubenswrapper[4737]: I0126 20:16:08.173010 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490961-hfz45_5f36a330-35fc-46b8-9f3f-4648e4e5485c/keystone-cron/0.log" Jan 26 20:16:08 crc kubenswrapper[4737]: I0126 20:16:08.237816 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c57d0600-f0a4-43d2-b974-ced2346aae55/kube-state-metrics/0.log" Jan 26 20:16:08 crc kubenswrapper[4737]: I0126 20:16:08.372957 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp_35694d2d-33da-4cab-96a8-4e14aa07b4f9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:08 crc kubenswrapper[4737]: I0126 20:16:08.602091 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-p6bgr_9f1823e5-fd64-4ddd-a4ed-5727de977754/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:08 crc kubenswrapper[4737]: I0126 20:16:08.873837 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_3cc067c6-ba98-4534-a9d8-2028c6e0ccf6/mysqld-exporter/0.log" Jan 26 20:16:09 crc kubenswrapper[4737]: I0126 20:16:09.261274 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp_f03ef699-8fd7-4aad-a3a5-8a7306048d86/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:09 crc kubenswrapper[4737]: I0126 20:16:09.316980 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-55cbc4d4bf-89lfk_a9b9b411-9b28-486b-bb42-cf668fba2ee5/neutron-httpd/0.log" Jan 26 20:16:09 crc kubenswrapper[4737]: I0126 20:16:09.470291 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-55cbc4d4bf-89lfk_a9b9b411-9b28-486b-bb42-cf668fba2ee5/neutron-api/0.log" Jan 26 20:16:10 crc kubenswrapper[4737]: I0126 20:16:10.030755 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5d833a0c-e63e-4296-85f9-f7489007fa6c/nova-cell0-conductor-conductor/0.log" Jan 26 20:16:10 crc kubenswrapper[4737]: I0126 20:16:10.587412 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c62bab3-337a-4449-ac7f-63dedc641524/nova-cell1-conductor-conductor/0.log" Jan 26 20:16:10 crc kubenswrapper[4737]: I0126 20:16:10.590777 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc6d57aa-811b-482e-abc2-5048e523ce88/nova-api-log/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.078238 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-m7qxj_c1f6bd41-c1ed-47f9-a3db-03756845afbc/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.159443 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_32bea17c-5210-413d-81b5-e30c0dbc0c77/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.322764 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc6d57aa-811b-482e-abc2-5048e523ce88/nova-api-api/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.607395 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e472c4b-c138-4b34-b972-84afd363d6dd/nova-metadata-log/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.855125 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a901aed9-dbba-43e3-bf8c-f6026e3ea49d/nova-scheduler-scheduler/0.log" Jan 26 20:16:11 crc kubenswrapper[4737]: I0126 20:16:11.934092 
4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/mysql-bootstrap/0.log" Jan 26 20:16:12 crc kubenswrapper[4737]: I0126 20:16:12.109415 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/mysql-bootstrap/0.log" Jan 26 20:16:12 crc kubenswrapper[4737]: I0126 20:16:12.149034 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/galera/0.log" Jan 26 20:16:12 crc kubenswrapper[4737]: I0126 20:16:12.348885 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/mysql-bootstrap/0.log" Jan 26 20:16:12 crc kubenswrapper[4737]: I0126 20:16:12.903858 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/mysql-bootstrap/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.097443 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/galera/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.183416 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d857f780-d620-4d1a-bacb-8ecff74a012f/openstackclient/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.341935 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-96rrx_6bdafee1-1c61-4cbe-b052-c5948c27152d/openstack-network-exporter/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.659846 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server-init/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.862513 4737 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovs-vswitchd/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.880818 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server-init/0.log" Jan 26 20:16:13 crc kubenswrapper[4737]: I0126 20:16:13.894785 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.140906 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zrckb_11408d0f-4b45-4dab-bc9e-965ac14aed79/ovn-controller/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.197394 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e472c4b-c138-4b34-b972-84afd363d6dd/nova-metadata-metadata/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.356907 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9r2p8_7602eee6-3627-420f-8e44-c19689be75de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.445429 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19bc14ba-dd2b-4cb9-969d-e44339856cf0/openstack-network-exporter/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.523634 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19bc14ba-dd2b-4cb9-969d-e44339856cf0/ovn-northd/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.766909 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6465a03e-5fc8-4886-b68b-531fe218230f/ovsdbserver-nb/0.log" Jan 26 20:16:14 crc kubenswrapper[4737]: I0126 20:16:14.767125 4737 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6465a03e-5fc8-4886-b68b-531fe218230f/openstack-network-exporter/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.007402 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_923f982a-41f5-4c9d-a2dc-50e96e54c283/openstack-network-exporter/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.045831 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_923f982a-41f5-4c9d-a2dc-50e96e54c283/ovsdbserver-sb/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.305816 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c974878b4-m6rmv_faf8de27-9da1-4a0d-9edf-ebb5d53fc272/placement-api/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.456038 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c974878b4-m6rmv_faf8de27-9da1-4a0d-9edf-ebb5d53fc272/placement-log/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.467223 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/init-config-reloader/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.794188 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/thanos-sidecar/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.795383 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/config-reloader/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.797173 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/prometheus/0.log" Jan 26 20:16:15 crc kubenswrapper[4737]: I0126 20:16:15.812038 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/init-config-reloader/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.061183 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/setup-container/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.330788 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/rabbitmq/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.395356 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/setup-container/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.464122 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/setup-container/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.844017 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/setup-container/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.924689 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/setup-container/0.log" Jan 26 20:16:16 crc kubenswrapper[4737]: I0126 20:16:16.950747 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/rabbitmq/0.log" Jan 26 20:16:17 crc kubenswrapper[4737]: I0126 20:16:17.137438 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/setup-container/0.log" Jan 26 20:16:17 crc kubenswrapper[4737]: I0126 20:16:17.229896 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/setup-container/0.log" Jan 26 20:16:17 crc kubenswrapper[4737]: I0126 20:16:17.298314 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/rabbitmq/0.log" Jan 26 20:16:17 crc kubenswrapper[4737]: I0126 20:16:17.898818 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/setup-container/0.log" Jan 26 20:16:17 crc kubenswrapper[4737]: I0126 20:16:17.993921 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/rabbitmq/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.056092 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt_34f77dce-aaea-4249-be45-fa7c47b5616b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.353242 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ld6hr_2af8847d-3acf-4733-a507-7d00229ef74c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.409383 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5_67eb47db-a20a-4f95-97c2-67df12c02360/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.623936 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4krm6_2440805a-4477-42f6-bc13-01fc157e1b94/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.749530 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-h2hhm_395dd2b5-3055-45e9-b528-9bc97b61743f/ssh-known-hosts-edpm-deployment/0.log" Jan 26 20:16:18 crc kubenswrapper[4737]: I0126 20:16:18.753404 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2618c486-a631-4a87-ba8f-d5ccad83a208/memcached/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.029091 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dd8ff9d59-rttts_38df0a7c-47f1-4834-b970-d815d009b6d7/proxy-server/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.070753 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2fbb8_c9be0bf2-1b3f-4f77-89ec-b5afa2362e47/swift-ring-rebalance/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.131259 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dd8ff9d59-rttts_38df0a7c-47f1-4834-b970-d815d009b6d7/proxy-httpd/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.291009 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-auditor/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.501257 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-reaper/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.516015 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-replicator/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.556036 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-auditor/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.563934 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-server/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.678278 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-replicator/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.772843 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-server/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.860759 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-updater/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.867372 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-expirer/0.log" Jan 26 20:16:19 crc kubenswrapper[4737]: I0126 20:16:19.897516 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-auditor/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.030689 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-replicator/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.103371 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-server/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.141085 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/rsync/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.148794 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-updater/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.181299 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/swift-recon-cron/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.461641 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-v27w7_6bacdfa3-047c-42c9-a233-7daac1e8b65d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.481011 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb_fe3a5992-1b84-4df9-bebe-3f0060fe631d/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.868762 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7e035125-8d0b-4019-9266-fd7abb0057da/test-operator-logs-container/0.log" Jan 26 20:16:20 crc kubenswrapper[4737]: I0126 20:16:20.900753 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-74xdm_bb314574-7438-4911-8b54-a1ccfa5a907d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:16:21 crc kubenswrapper[4737]: I0126 20:16:21.253799 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d81cdf24-ce67-401f-869f-805f4718fce4/tempest-tests-tempest-tests-runner/0.log" Jan 26 20:16:52 crc kubenswrapper[4737]: I0126 20:16:52.671256 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:16:52 crc 
kubenswrapper[4737]: I0126 20:16:52.873275 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:16:52 crc kubenswrapper[4737]: I0126 20:16:52.879975 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:16:52 crc kubenswrapper[4737]: I0126 20:16:52.896479 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.058286 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.107426 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/extract/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.109977 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.285173 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-p42h8_288df3c7-1220-419c-bde6-67ee3922b8ad/manager/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.341612 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-hbqjs_6cc46694-b15a-4417-a0a9-f4c13184f2ca/manager/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.480194 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6mjbw_62ddf97f-7d75-4667-9480-17cb809b98f5/manager/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.663486 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-bl8hk_97c0989d-f677-4460-b62b-4733c7db29d4/manager/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.809041 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j9nc9_3508c1f8-c9d9-41bf-b71e-eebb13eb5e86/manager/0.log" Jan 26 20:16:53 crc kubenswrapper[4737]: I0126 20:16:53.862991 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-kq82d_d80defd5-46d2-4e20-b093-dff95dca651b/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.096466 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jpmmh_b3a010fd-4f62-40c6-a377-be5c6f2e6ba7/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.305935 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-9lqk4_6904aa8b-12dd-4139-9a9f-f60be010cf3b/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.422127 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-zbp84_03d41d00-eefc-45c4-aaea-f09a5e34362b/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.431865 4737 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-v9b85_0d2709bf-2113-45d7-94a1-816bc230044a/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.664336 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz_5b2ad507-8ef0-40e5-a10c-d5ed62a8181e/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.702853 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-tz995_01b83dfe-58bb-40fa-a0e8-b942b4c79b72/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.968300 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-qr8vf_284309e9-61a9-47c4-918a-6f097cf10aa1/manager/0.log" Jan 26 20:16:54 crc kubenswrapper[4737]: I0126 20:16:54.986298 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-xrm44_3164f5a5-0f37-4ab6-bc2a-51978eb9f842/manager/0.log" Jan 26 20:16:55 crc kubenswrapper[4737]: I0126 20:16:55.163800 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv_5175d9d3-4bf9-4f52-be13-e33b02e03592/manager/0.log" Jan 26 20:16:55 crc kubenswrapper[4737]: I0126 20:16:55.344624 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-848546446f-8xbh6_de29bea2-d234-4bc2-b1fc-90a18e84ed17/operator/0.log" Jan 26 20:16:55 crc kubenswrapper[4737]: I0126 20:16:55.644929 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-n9rk2_8f103d19-388b-408e-a7e5-b17428b986c9/registry-server/0.log" Jan 26 20:16:55 crc kubenswrapper[4737]: I0126 20:16:55.848890 4737 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-55xkx_c9b745b4-487d-4ccb-a398-8d9af643ae50/manager/0.log" Jan 26 20:16:56 crc kubenswrapper[4737]: I0126 20:16:56.016986 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lfh5n_11c8ec8e-f710-4b3f-9bf2-be1834ddffb9/manager/0.log" Jan 26 20:16:56 crc kubenswrapper[4737]: I0126 20:16:56.331337 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5xvj4_3c491fdc-889c-4d4a-aedd-60a242e26027/operator/0.log" Jan 26 20:16:56 crc kubenswrapper[4737]: I0126 20:16:56.734418 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6ffbd5d47c-xwdkt_c7cfbb47-6d43-4030-a3d1-516430aeffb7/manager/0.log" Jan 26 20:16:56 crc kubenswrapper[4737]: I0126 20:16:56.756666 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9lkfc_8aa44595-2352-4a3e-888f-3409254cde36/manager/0.log" Jan 26 20:16:56 crc kubenswrapper[4737]: I0126 20:16:56.944591 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-4n95b_c68a8293-a298-4384-83f0-4a7e50517d3b/manager/0.log" Jan 26 20:16:57 crc kubenswrapper[4737]: I0126 20:16:57.101749 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-hx2gj_148ce19e-3a70-4b27-98e1-87807dee6178/manager/0.log" Jan 26 20:16:57 crc kubenswrapper[4737]: I0126 20:16:57.111047 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cf49855b4-zfzgj_0716cfbf-95d3-44fd-9e28-9b861568b791/manager/0.log" Jan 26 20:17:20 crc kubenswrapper[4737]: I0126 20:17:20.536322 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6f78q_cf12407d-16ca-40d9-8279-f46693aee8b1/control-plane-machine-set-operator/0.log" Jan 26 20:17:20 crc kubenswrapper[4737]: I0126 20:17:20.684891 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ktwh7_c8be3738-e6c1-4cc8-ae8a-a23387b73213/kube-rbac-proxy/0.log" Jan 26 20:17:20 crc kubenswrapper[4737]: I0126 20:17:20.713994 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ktwh7_c8be3738-e6c1-4cc8-ae8a-a23387b73213/machine-api-operator/0.log" Jan 26 20:17:36 crc kubenswrapper[4737]: I0126 20:17:36.671677 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bjjtc_780b9f7e-40b5-4b9b-94bc-0401ce35b5e3/cert-manager-controller/0.log" Jan 26 20:17:37 crc kubenswrapper[4737]: I0126 20:17:37.004152 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qschs_c42be5f9-9a91-43c2-ac4b-5c7b49bb434c/cert-manager-cainjector/0.log" Jan 26 20:17:37 crc kubenswrapper[4737]: I0126 20:17:37.018247 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-57xsl_e5a74a57-5f9a-442f-a166-7787942994c8/cert-manager-webhook/0.log" Jan 26 20:17:53 crc kubenswrapper[4737]: I0126 20:17:53.605321 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zdxbz_4c4a0a5e-ab9e-478c-8f90-741563313097/nmstate-console-plugin/0.log" Jan 26 20:17:53 crc kubenswrapper[4737]: I0126 20:17:53.858948 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-99d4z_1a140881-5ef3-4582-9694-e24fc14a6fb4/nmstate-handler/0.log" Jan 26 20:17:53 crc kubenswrapper[4737]: I0126 20:17:53.979924 4737 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qh796_33e00306-edd4-487d-9bc6-e49fa9692a29/kube-rbac-proxy/0.log" Jan 26 20:17:54 crc kubenswrapper[4737]: I0126 20:17:54.079811 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qh796_33e00306-edd4-487d-9bc6-e49fa9692a29/nmstate-metrics/0.log" Jan 26 20:17:54 crc kubenswrapper[4737]: I0126 20:17:54.242515 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dg9v7_35a928d3-7171-42be-8005-cbdfec1891c3/nmstate-operator/0.log" Jan 26 20:17:54 crc kubenswrapper[4737]: I0126 20:17:54.383504 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f425m_30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5/nmstate-webhook/0.log" Jan 26 20:18:00 crc kubenswrapper[4737]: I0126 20:18:00.949179 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:18:00 crc kubenswrapper[4737]: I0126 20:18:00.949850 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:18:12 crc kubenswrapper[4737]: I0126 20:18:12.943861 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/manager/0.log" Jan 26 20:18:12 crc kubenswrapper[4737]: I0126 20:18:12.971193 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/kube-rbac-proxy/0.log" Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.425411 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jvfnx_780e85db-cb8c-4a8c-920d-2594cd33eebf/prometheus-operator/0.log" Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.650783 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_33031648-f53a-4f71-8c03-041f7f1fcbf5/prometheus-operator-admission-webhook/0.log" Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.802061 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_cc4df7ac-3298-4316-8c9b-1ac9827330fd/prometheus-operator-admission-webhook/0.log" Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.915409 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xf99z_b319754a-04cc-40dd-b031-ea72a3d19db2/operator/0.log" Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.949255 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:18:30 crc kubenswrapper[4737]: I0126 20:18:30.949324 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:18:31 crc 
kubenswrapper[4737]: I0126 20:18:31.111676 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ckxn2_6b80cd0d-81ac-4f45-a80c-3b1cf442fc44/observability-ui-dashboards/0.log" Jan 26 20:18:31 crc kubenswrapper[4737]: I0126 20:18:31.214403 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r5vwv_7478def9-da54-4632-803e-47f36b6fb64b/perses-operator/0.log" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.290475 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-zx2hl_19021b35-3bd2-40f3-a312-466b0c15bc35/cluster-logging-operator/0.log" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.595741 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-vbgpv_6e3d8492-59e3-4dc0-b14a-261053397eb7/collector/0.log" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.639104 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_274a7c37-3e64-45ce-8d6f-dfeac9c15288/loki-compactor/0.log" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.726931 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:18:50 crc kubenswrapper[4737]: E0126 20:18:50.729482 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bc3963-c291-48b0-b01c-8000605805ce" containerName="container-00" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.729512 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bc3963-c291-48b0-b01c-8000605805ce" containerName="container-00" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.730272 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bc3963-c291-48b0-b01c-8000605805ce" containerName="container-00" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 
20:18:50.747990 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.752379 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.884639 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-6wp46_f15f2968-e05a-49f0-8024-3a1764d4b9e2/loki-distributor/0.log" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.899820 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.899965 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.900088 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-928zp\" (UniqueName: \"kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:50 crc kubenswrapper[4737]: I0126 20:18:50.952805 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-c5kng_e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e/gateway/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.003947 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.004081 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.004199 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-928zp\" (UniqueName: \"kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.004889 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.004976 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content\") pod \"redhat-marketplace-pg7ph\" (UID: 
\"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.026383 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-928zp\" (UniqueName: \"kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp\") pod \"redhat-marketplace-pg7ph\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.072184 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-c5kng_e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e/opa/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.083877 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.289815 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-kcfsl_225843b1-6423-4d7f-aa3c-5945a9e4bd8e/gateway/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.316527 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-kcfsl_225843b1-6423-4d7f-aa3c-5945a9e4bd8e/opa/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.635024 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7d74d1ee-657b-4404-9390-cd94e3cb6d2c/loki-index-gateway/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.845361 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.857993 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-ingester-0_a05526c9-7b63-4f57-bdaf-95d8a7912bb8/loki-ingester/0.log" Jan 26 20:18:51 crc kubenswrapper[4737]: I0126 20:18:51.989506 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-rsdfq_15449cbd-7753-47b6-811f-059d9f83ff0b/loki-querier/0.log" Jan 26 20:18:52 crc kubenswrapper[4737]: I0126 20:18:52.030203 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerStarted","Data":"980ef13dc7eebe4942b1250960758ce34c4248226e28b010633d5daf61715ebf"} Jan 26 20:18:52 crc kubenswrapper[4737]: I0126 20:18:52.095865 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-qqkdc_954c3b49-1fc8-4e5c-9312-7b8e66b7a681/loki-query-frontend/0.log" Jan 26 20:18:53 crc kubenswrapper[4737]: I0126 20:18:53.045921 4737 generic.go:334] "Generic (PLEG): container finished" podID="c670b440-832f-4bf1-8107-72aa5c97f637" containerID="e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833" exitCode=0 Jan 26 20:18:53 crc kubenswrapper[4737]: I0126 20:18:53.045968 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerDied","Data":"e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833"} Jan 26 20:18:53 crc kubenswrapper[4737]: I0126 20:18:53.048785 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:18:54 crc kubenswrapper[4737]: I0126 20:18:54.061538 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerStarted","Data":"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77"} 
Jan 26 20:18:55 crc kubenswrapper[4737]: I0126 20:18:55.082038 4737 generic.go:334] "Generic (PLEG): container finished" podID="c670b440-832f-4bf1-8107-72aa5c97f637" containerID="bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77" exitCode=0 Jan 26 20:18:55 crc kubenswrapper[4737]: I0126 20:18:55.082572 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerDied","Data":"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77"} Jan 26 20:18:56 crc kubenswrapper[4737]: I0126 20:18:56.100245 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerStarted","Data":"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3"} Jan 26 20:18:56 crc kubenswrapper[4737]: I0126 20:18:56.132051 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg7ph" podStartSLOduration=3.489769951 podStartE2EDuration="6.132016682s" podCreationTimestamp="2026-01-26 20:18:50 +0000 UTC" firstStartedPulling="2026-01-26 20:18:53.048222529 +0000 UTC m=+6506.356417237" lastFinishedPulling="2026-01-26 20:18:55.69046922 +0000 UTC m=+6508.998663968" observedRunningTime="2026-01-26 20:18:56.127191295 +0000 UTC m=+6509.435386033" watchObservedRunningTime="2026-01-26 20:18:56.132016682 +0000 UTC m=+6509.440211390" Jan 26 20:19:00 crc kubenswrapper[4737]: I0126 20:19:00.949473 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:19:00 crc kubenswrapper[4737]: I0126 20:19:00.950380 4737 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:19:00 crc kubenswrapper[4737]: I0126 20:19:00.950458 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:19:00 crc kubenswrapper[4737]: I0126 20:19:00.952600 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:19:00 crc kubenswrapper[4737]: I0126 20:19:00.952873 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b" gracePeriod=600 Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.084562 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.084941 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.163021 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.166308 4737 generic.go:334] "Generic (PLEG): 
container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b" exitCode=0 Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.167450 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b"} Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.167576 4737 scope.go:117] "RemoveContainer" containerID="33bebcaabae9d57274c8f9ce19e91e5a2ee2c813697141d70a95623238e43961" Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.231192 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:01 crc kubenswrapper[4737]: I0126 20:19:01.417261 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:19:02 crc kubenswrapper[4737]: I0126 20:19:02.185367 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546"} Jan 26 20:19:03 crc kubenswrapper[4737]: I0126 20:19:03.196100 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg7ph" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="registry-server" containerID="cri-o://acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3" gracePeriod=2 Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.803111 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.887465 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities\") pod \"c670b440-832f-4bf1-8107-72aa5c97f637\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.887701 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content\") pod \"c670b440-832f-4bf1-8107-72aa5c97f637\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.887895 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-928zp\" (UniqueName: \"kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp\") pod \"c670b440-832f-4bf1-8107-72aa5c97f637\" (UID: \"c670b440-832f-4bf1-8107-72aa5c97f637\") " Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.889910 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities" (OuterVolumeSpecName: "utilities") pod "c670b440-832f-4bf1-8107-72aa5c97f637" (UID: "c670b440-832f-4bf1-8107-72aa5c97f637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.905444 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp" (OuterVolumeSpecName: "kube-api-access-928zp") pod "c670b440-832f-4bf1-8107-72aa5c97f637" (UID: "c670b440-832f-4bf1-8107-72aa5c97f637"). InnerVolumeSpecName "kube-api-access-928zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.917716 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c670b440-832f-4bf1-8107-72aa5c97f637" (UID: "c670b440-832f-4bf1-8107-72aa5c97f637"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.990941 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.990968 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c670b440-832f-4bf1-8107-72aa5c97f637-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:03.990984 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-928zp\" (UniqueName: \"kubernetes.io/projected/c670b440-832f-4bf1-8107-72aa5c97f637-kube-api-access-928zp\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.215251 4737 generic.go:334] "Generic (PLEG): container finished" podID="c670b440-832f-4bf1-8107-72aa5c97f637" containerID="acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3" exitCode=0 Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.215331 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg7ph" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.215328 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerDied","Data":"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3"} Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.215492 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg7ph" event={"ID":"c670b440-832f-4bf1-8107-72aa5c97f637","Type":"ContainerDied","Data":"980ef13dc7eebe4942b1250960758ce34c4248226e28b010633d5daf61715ebf"} Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.215523 4737 scope.go:117] "RemoveContainer" containerID="acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.262005 4737 scope.go:117] "RemoveContainer" containerID="bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.267765 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.278776 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg7ph"] Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.302465 4737 scope.go:117] "RemoveContainer" containerID="e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.352355 4737 scope.go:117] "RemoveContainer" containerID="acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3" Jan 26 20:19:04 crc kubenswrapper[4737]: E0126 20:19:04.353431 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3\": container with ID starting with acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3 not found: ID does not exist" containerID="acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.353502 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3"} err="failed to get container status \"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3\": rpc error: code = NotFound desc = could not find container \"acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3\": container with ID starting with acaf66f19dcbdb7f3bc7424b708bb84ac56b64b0341ca6ce53fa15bb0c085df3 not found: ID does not exist" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.353548 4737 scope.go:117] "RemoveContainer" containerID="bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77" Jan 26 20:19:04 crc kubenswrapper[4737]: E0126 20:19:04.353968 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77\": container with ID starting with bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77 not found: ID does not exist" containerID="bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.354027 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77"} err="failed to get container status \"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77\": rpc error: code = NotFound desc = could not find container \"bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77\": container with ID 
starting with bbbcd2b510a52a7532d9538723d02ebfe3dbae5dc1382f82b739e6c873f5bc77 not found: ID does not exist" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.354106 4737 scope.go:117] "RemoveContainer" containerID="e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833" Jan 26 20:19:04 crc kubenswrapper[4737]: E0126 20:19:04.354786 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833\": container with ID starting with e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833 not found: ID does not exist" containerID="e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.354829 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833"} err="failed to get container status \"e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833\": rpc error: code = NotFound desc = could not find container \"e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833\": container with ID starting with e57efa7a84277618ac32ebd3ba63875af3bb41320898577fe9b5dbfe6b658833 not found: ID does not exist" Jan 26 20:19:04 crc kubenswrapper[4737]: I0126 20:19:04.994724 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" path="/var/lib/kubelet/pods/c670b440-832f-4bf1-8107-72aa5c97f637/volumes" Jan 26 20:19:10 crc kubenswrapper[4737]: I0126 20:19:10.661737 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-54gqz_316b58c7-76eb-4b53-adee-6e456286e313/kube-rbac-proxy/0.log" Jan 26 20:19:10 crc kubenswrapper[4737]: I0126 20:19:10.797311 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-54gqz_316b58c7-76eb-4b53-adee-6e456286e313/controller/0.log" Jan 26 20:19:10 crc kubenswrapper[4737]: I0126 20:19:10.959628 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.209135 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.226469 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.264735 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.287430 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.553573 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.580206 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.589659 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.619764 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.826163 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.840053 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.849478 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/controller/0.log" Jan 26 20:19:11 crc kubenswrapper[4737]: I0126 20:19:11.851204 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.047174 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/kube-rbac-proxy-frr/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.087400 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/frr-metrics/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.124587 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/kube-rbac-proxy/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.348806 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/reloader/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.415767 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zg2pm_db423313-ded0-4540-abdb-a7a8c5944989/frr-k8s-webhook-server/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.784798 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-948bbcb9c-jrztq_0a7ecdef-57dc-45fc-9142-3889fb44d2e9/manager/0.log" Jan 26 20:19:12 crc kubenswrapper[4737]: I0126 20:19:12.822982 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75cffd444d-hgw8t_db9aadf5-9872-40e4-8333-da2779361dcf/webhook-server/0.log" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.031543 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bs5fc_ee468080-345d-4821-ab62-d1034fd7cd01/kube-rbac-proxy/0.log" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.512698 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:13 crc kubenswrapper[4737]: E0126 20:19:13.513533 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="registry-server" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.513677 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="registry-server" Jan 26 20:19:13 crc kubenswrapper[4737]: E0126 20:19:13.513743 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="extract-content" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.513789 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="extract-content" Jan 26 20:19:13 crc kubenswrapper[4737]: E0126 20:19:13.513844 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" 
containerName="extract-utilities" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.513887 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="extract-utilities" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.514197 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="c670b440-832f-4bf1-8107-72aa5c97f637" containerName="registry-server" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.516187 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.534324 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.677617 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xbm\" (UniqueName: \"kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.677808 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.677951 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " 
pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.764820 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bs5fc_ee468080-345d-4821-ab62-d1034fd7cd01/speaker/0.log" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.779841 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xbm\" (UniqueName: \"kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.779925 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.780015 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.780518 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.780730 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.800370 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xbm\" (UniqueName: \"kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm\") pod \"certified-operators-bkjhh\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:13 crc kubenswrapper[4737]: I0126 20:19:13.863436 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:14 crc kubenswrapper[4737]: I0126 20:19:14.173141 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/frr/0.log" Jan 26 20:19:14 crc kubenswrapper[4737]: I0126 20:19:14.470576 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:15 crc kubenswrapper[4737]: I0126 20:19:15.381599 4737 generic.go:334] "Generic (PLEG): container finished" podID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerID="88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa" exitCode=0 Jan 26 20:19:15 crc kubenswrapper[4737]: I0126 20:19:15.381701 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerDied","Data":"88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa"} Jan 26 20:19:15 crc kubenswrapper[4737]: I0126 20:19:15.382122 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" 
event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerStarted","Data":"9b611740ee48e6e47537d0cf59c567af260b332a57e652cf079b6bd02d89df4b"} Jan 26 20:19:17 crc kubenswrapper[4737]: I0126 20:19:17.410517 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerStarted","Data":"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891"} Jan 26 20:19:18 crc kubenswrapper[4737]: I0126 20:19:18.427397 4737 generic.go:334] "Generic (PLEG): container finished" podID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerID="b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891" exitCode=0 Jan 26 20:19:18 crc kubenswrapper[4737]: I0126 20:19:18.427768 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerDied","Data":"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891"} Jan 26 20:19:19 crc kubenswrapper[4737]: I0126 20:19:19.441261 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerStarted","Data":"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61"} Jan 26 20:19:19 crc kubenswrapper[4737]: I0126 20:19:19.492399 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bkjhh" podStartSLOduration=2.91308093 podStartE2EDuration="6.492371617s" podCreationTimestamp="2026-01-26 20:19:13 +0000 UTC" firstStartedPulling="2026-01-26 20:19:15.38424888 +0000 UTC m=+6528.692443588" lastFinishedPulling="2026-01-26 20:19:18.963539567 +0000 UTC m=+6532.271734275" observedRunningTime="2026-01-26 20:19:19.474439458 +0000 UTC m=+6532.782634166" watchObservedRunningTime="2026-01-26 20:19:19.492371617 +0000 UTC 
m=+6532.800566325" Jan 26 20:19:23 crc kubenswrapper[4737]: I0126 20:19:23.864428 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:23 crc kubenswrapper[4737]: I0126 20:19:23.864924 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:23 crc kubenswrapper[4737]: I0126 20:19:23.950930 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:24 crc kubenswrapper[4737]: I0126 20:19:24.599713 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:24 crc kubenswrapper[4737]: I0126 20:19:24.653578 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:26 crc kubenswrapper[4737]: I0126 20:19:26.554141 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bkjhh" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="registry-server" containerID="cri-o://3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61" gracePeriod=2 Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.051763 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.193234 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities\") pod \"bc6f193e-6422-48fe-a180-d4a9e996f72d\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.193294 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content\") pod \"bc6f193e-6422-48fe-a180-d4a9e996f72d\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.193493 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4xbm\" (UniqueName: \"kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm\") pod \"bc6f193e-6422-48fe-a180-d4a9e996f72d\" (UID: \"bc6f193e-6422-48fe-a180-d4a9e996f72d\") " Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.194555 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities" (OuterVolumeSpecName: "utilities") pod "bc6f193e-6422-48fe-a180-d4a9e996f72d" (UID: "bc6f193e-6422-48fe-a180-d4a9e996f72d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.208279 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm" (OuterVolumeSpecName: "kube-api-access-c4xbm") pod "bc6f193e-6422-48fe-a180-d4a9e996f72d" (UID: "bc6f193e-6422-48fe-a180-d4a9e996f72d"). InnerVolumeSpecName "kube-api-access-c4xbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.300000 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4xbm\" (UniqueName: \"kubernetes.io/projected/bc6f193e-6422-48fe-a180-d4a9e996f72d-kube-api-access-c4xbm\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.300339 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.443437 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc6f193e-6422-48fe-a180-d4a9e996f72d" (UID: "bc6f193e-6422-48fe-a180-d4a9e996f72d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.505585 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6f193e-6422-48fe-a180-d4a9e996f72d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.568892 4737 generic.go:334] "Generic (PLEG): container finished" podID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerID="3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61" exitCode=0 Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.568969 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bkjhh" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.569002 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerDied","Data":"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61"} Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.570479 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkjhh" event={"ID":"bc6f193e-6422-48fe-a180-d4a9e996f72d","Type":"ContainerDied","Data":"9b611740ee48e6e47537d0cf59c567af260b332a57e652cf079b6bd02d89df4b"} Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.570507 4737 scope.go:117] "RemoveContainer" containerID="3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.597365 4737 scope.go:117] "RemoveContainer" containerID="b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.612055 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.622960 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bkjhh"] Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.629480 4737 scope.go:117] "RemoveContainer" containerID="88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.707955 4737 scope.go:117] "RemoveContainer" containerID="3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61" Jan 26 20:19:27 crc kubenswrapper[4737]: E0126 20:19:27.708515 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61\": container with ID starting with 3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61 not found: ID does not exist" containerID="3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.708551 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61"} err="failed to get container status \"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61\": rpc error: code = NotFound desc = could not find container \"3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61\": container with ID starting with 3c794353144b355c908666d0971fe277938cee3ed0f380fc22c3326a1e737c61 not found: ID does not exist" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.708579 4737 scope.go:117] "RemoveContainer" containerID="b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891" Jan 26 20:19:27 crc kubenswrapper[4737]: E0126 20:19:27.709009 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891\": container with ID starting with b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891 not found: ID does not exist" containerID="b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.709170 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891"} err="failed to get container status \"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891\": rpc error: code = NotFound desc = could not find container \"b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891\": container with ID 
starting with b632cc7084424c69bbc6970c458e7b60986d6a82ba0648b9f4f9f3d97c537891 not found: ID does not exist" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.709271 4737 scope.go:117] "RemoveContainer" containerID="88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa" Jan 26 20:19:27 crc kubenswrapper[4737]: E0126 20:19:27.711007 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa\": container with ID starting with 88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa not found: ID does not exist" containerID="88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa" Jan 26 20:19:27 crc kubenswrapper[4737]: I0126 20:19:27.711035 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa"} err="failed to get container status \"88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa\": rpc error: code = NotFound desc = could not find container \"88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa\": container with ID starting with 88127b0a0d8246ca8f2ef73d96ce02dd3a929fdd3eeedc793b0dbb9e83ab25aa not found: ID does not exist" Jan 26 20:19:28 crc kubenswrapper[4737]: I0126 20:19:28.996659 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" path="/var/lib/kubelet/pods/bc6f193e-6422-48fe-a180-d4a9e996f72d/volumes" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.420334 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.635503 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.665320 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.769552 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.911600 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/extract/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.912056 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:19:30 crc kubenswrapper[4737]: I0126 20:19:30.912615 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.159547 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.395242 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 
20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.445684 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.485270 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.659792 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.696221 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.699388 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/extract/0.log" Jan 26 20:19:31 crc kubenswrapper[4737]: I0126 20:19:31.869913 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.064634 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.105940 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.106850 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.329277 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.331822 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.345396 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/extract/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.540564 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.759919 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.817220 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 
20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.817444 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.985628 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:19:32 crc kubenswrapper[4737]: I0126 20:19:32.986125 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.027263 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/extract/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.217848 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.411780 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.416631 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.432717 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.644751 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.652396 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/extract/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.702545 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:19:33 crc kubenswrapper[4737]: I0126 20:19:33.846500 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.151799 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.156010 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.156548 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.344500 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.344500 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.419334 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.592547 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.695446 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.720263 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.919734 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:19:34 crc kubenswrapper[4737]: I0126 20:19:34.954534 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.230631 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dr8sf_faf30849-7c19-44f9-ba42-3ad3f14efe0d/marketplace-operator/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.388358 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.564459 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/registry-server/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.572411 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/registry-server/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.595804 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.596344 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.639932 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.816153 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.868387 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:19:35 crc kubenswrapper[4737]: I0126 20:19:35.906371 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.068581 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.097338 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/registry-server/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.114997 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.115181 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.307447 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:19:36 crc kubenswrapper[4737]: I0126 20:19:36.319318 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" Jan 26 20:19:37 crc kubenswrapper[4737]: I0126 20:19:37.005587 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/registry-server/0.log" Jan 26 
20:19:50 crc kubenswrapper[4737]: I0126 20:19:50.746708 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_33031648-f53a-4f71-8c03-041f7f1fcbf5/prometheus-operator-admission-webhook/0.log" Jan 26 20:19:50 crc kubenswrapper[4737]: I0126 20:19:50.781183 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jvfnx_780e85db-cb8c-4a8c-920d-2594cd33eebf/prometheus-operator/0.log" Jan 26 20:19:50 crc kubenswrapper[4737]: I0126 20:19:50.812471 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_cc4df7ac-3298-4316-8c9b-1ac9827330fd/prometheus-operator-admission-webhook/0.log" Jan 26 20:19:51 crc kubenswrapper[4737]: I0126 20:19:51.021400 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xf99z_b319754a-04cc-40dd-b031-ea72a3d19db2/operator/0.log" Jan 26 20:19:51 crc kubenswrapper[4737]: I0126 20:19:51.040860 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r5vwv_7478def9-da54-4632-803e-47f36b6fb64b/perses-operator/0.log" Jan 26 20:19:51 crc kubenswrapper[4737]: I0126 20:19:51.090134 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ckxn2_6b80cd0d-81ac-4f45-a80c-3b1cf442fc44/observability-ui-dashboards/0.log" Jan 26 20:20:06 crc kubenswrapper[4737]: I0126 20:20:06.049932 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/kube-rbac-proxy/0.log" Jan 26 20:20:06 crc kubenswrapper[4737]: I0126 20:20:06.158846 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/manager/0.log" Jan 26 20:20:21 crc kubenswrapper[4737]: E0126 20:20:21.302707 4737 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.236:46902->38.102.83.236:42217: write tcp 38.102.83.236:46902->38.102.83.236:42217: write: broken pipe Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.740450 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:29 crc kubenswrapper[4737]: E0126 20:20:29.742217 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="registry-server" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.742235 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="registry-server" Jan 26 20:20:29 crc kubenswrapper[4737]: E0126 20:20:29.742262 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="extract-content" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.742269 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="extract-content" Jan 26 20:20:29 crc kubenswrapper[4737]: E0126 20:20:29.742316 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="extract-utilities" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.742324 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="extract-utilities" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.742546 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6f193e-6422-48fe-a180-d4a9e996f72d" containerName="registry-server" Jan 26 20:20:29 crc 
kubenswrapper[4737]: I0126 20:20:29.744276 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.793801 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.831097 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.831157 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.831249 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpmc\" (UniqueName: \"kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.935725 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 
20:20:29.935804 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.935923 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpmc\" (UniqueName: \"kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.936339 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.936481 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:29 crc kubenswrapper[4737]: I0126 20:20:29.979698 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpmc\" (UniqueName: \"kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc\") pod \"redhat-operators-vh5ns\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:30 crc kubenswrapper[4737]: I0126 20:20:30.067410 4737 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:30 crc kubenswrapper[4737]: I0126 20:20:30.614719 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:31 crc kubenswrapper[4737]: I0126 20:20:31.427870 4737 generic.go:334] "Generic (PLEG): container finished" podID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerID="40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869" exitCode=0 Jan 26 20:20:31 crc kubenswrapper[4737]: I0126 20:20:31.427968 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerDied","Data":"40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869"} Jan 26 20:20:31 crc kubenswrapper[4737]: I0126 20:20:31.428626 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerStarted","Data":"f5d1f3083b59569eb08f36c058d7bcd510771ca8f3fa1aef74f9fd43ebc85578"} Jan 26 20:20:32 crc kubenswrapper[4737]: I0126 20:20:32.442648 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerStarted","Data":"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393"} Jan 26 20:20:35 crc kubenswrapper[4737]: I0126 20:20:35.476096 4737 generic.go:334] "Generic (PLEG): container finished" podID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerID="0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393" exitCode=0 Jan 26 20:20:35 crc kubenswrapper[4737]: I0126 20:20:35.476121 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" 
event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerDied","Data":"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393"} Jan 26 20:20:36 crc kubenswrapper[4737]: I0126 20:20:36.489915 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerStarted","Data":"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67"} Jan 26 20:20:36 crc kubenswrapper[4737]: I0126 20:20:36.528955 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vh5ns" podStartSLOduration=3.032169324 podStartE2EDuration="7.528932908s" podCreationTimestamp="2026-01-26 20:20:29 +0000 UTC" firstStartedPulling="2026-01-26 20:20:31.430503791 +0000 UTC m=+6604.738698499" lastFinishedPulling="2026-01-26 20:20:35.927267375 +0000 UTC m=+6609.235462083" observedRunningTime="2026-01-26 20:20:36.515306145 +0000 UTC m=+6609.823500853" watchObservedRunningTime="2026-01-26 20:20:36.528932908 +0000 UTC m=+6609.837127616" Jan 26 20:20:38 crc kubenswrapper[4737]: I0126 20:20:38.985372 4737 scope.go:117] "RemoveContainer" containerID="40fb2c371f9503b44053751860bbb053cb79a1fc291c57bf4325fa03712f4cf2" Jan 26 20:20:40 crc kubenswrapper[4737]: I0126 20:20:40.067624 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:40 crc kubenswrapper[4737]: I0126 20:20:40.068149 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:41 crc kubenswrapper[4737]: I0126 20:20:41.132245 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh5ns" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="registry-server" probeResult="failure" output=< Jan 26 20:20:41 crc kubenswrapper[4737]: timeout: failed to connect 
service ":50051" within 1s Jan 26 20:20:41 crc kubenswrapper[4737]: > Jan 26 20:20:50 crc kubenswrapper[4737]: I0126 20:20:50.128524 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:50 crc kubenswrapper[4737]: I0126 20:20:50.202010 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:51 crc kubenswrapper[4737]: I0126 20:20:51.964364 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:51 crc kubenswrapper[4737]: I0126 20:20:51.966004 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vh5ns" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="registry-server" containerID="cri-o://f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67" gracePeriod=2 Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.610031 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.696567 4737 generic.go:334] "Generic (PLEG): container finished" podID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerID="f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67" exitCode=0 Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.696649 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh5ns" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.696663 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerDied","Data":"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67"} Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.697032 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh5ns" event={"ID":"1cb89f00-39f0-460c-80c5-1fb80378c523","Type":"ContainerDied","Data":"f5d1f3083b59569eb08f36c058d7bcd510771ca8f3fa1aef74f9fd43ebc85578"} Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.697055 4737 scope.go:117] "RemoveContainer" containerID="f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.733506 4737 scope.go:117] "RemoveContainer" containerID="0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.763440 4737 scope.go:117] "RemoveContainer" containerID="40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.764500 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content\") pod \"1cb89f00-39f0-460c-80c5-1fb80378c523\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.764633 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities\") pod \"1cb89f00-39f0-460c-80c5-1fb80378c523\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 
20:20:52.764848 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzpmc\" (UniqueName: \"kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc\") pod \"1cb89f00-39f0-460c-80c5-1fb80378c523\" (UID: \"1cb89f00-39f0-460c-80c5-1fb80378c523\") " Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.765777 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities" (OuterVolumeSpecName: "utilities") pod "1cb89f00-39f0-460c-80c5-1fb80378c523" (UID: "1cb89f00-39f0-460c-80c5-1fb80378c523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.775265 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc" (OuterVolumeSpecName: "kube-api-access-vzpmc") pod "1cb89f00-39f0-460c-80c5-1fb80378c523" (UID: "1cb89f00-39f0-460c-80c5-1fb80378c523"). InnerVolumeSpecName "kube-api-access-vzpmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.869256 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzpmc\" (UniqueName: \"kubernetes.io/projected/1cb89f00-39f0-460c-80c5-1fb80378c523-kube-api-access-vzpmc\") on node \"crc\" DevicePath \"\"" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.869350 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.899189 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cb89f00-39f0-460c-80c5-1fb80378c523" (UID: "1cb89f00-39f0-460c-80c5-1fb80378c523"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.899894 4737 scope.go:117] "RemoveContainer" containerID="f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67" Jan 26 20:20:52 crc kubenswrapper[4737]: E0126 20:20:52.900947 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67\": container with ID starting with f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67 not found: ID does not exist" containerID="f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.901045 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67"} err="failed to get container status 
\"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67\": rpc error: code = NotFound desc = could not find container \"f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67\": container with ID starting with f3242e996020d7d85a9147c93618ea1686519824ac1b4178db9e663f3a93ea67 not found: ID does not exist" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.901133 4737 scope.go:117] "RemoveContainer" containerID="0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393" Jan 26 20:20:52 crc kubenswrapper[4737]: E0126 20:20:52.904014 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393\": container with ID starting with 0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393 not found: ID does not exist" containerID="0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.904105 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393"} err="failed to get container status \"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393\": rpc error: code = NotFound desc = could not find container \"0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393\": container with ID starting with 0d55eaa861608fb56d1bce053e7a00cd83c18ded9152293c52c1c134fbcc0393 not found: ID does not exist" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.904142 4737 scope.go:117] "RemoveContainer" containerID="40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869" Jan 26 20:20:52 crc kubenswrapper[4737]: E0126 20:20:52.906086 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869\": container with ID starting with 40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869 not found: ID does not exist" containerID="40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.906157 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869"} err="failed to get container status \"40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869\": rpc error: code = NotFound desc = could not find container \"40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869\": container with ID starting with 40b86ade32fa51856de853de79b1945c86fe9db7a97d1ab625d7071f127e8869 not found: ID does not exist" Jan 26 20:20:52 crc kubenswrapper[4737]: I0126 20:20:52.971258 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb89f00-39f0-460c-80c5-1fb80378c523-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:20:53 crc kubenswrapper[4737]: I0126 20:20:53.076134 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:53 crc kubenswrapper[4737]: I0126 20:20:53.096447 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vh5ns"] Jan 26 20:20:55 crc kubenswrapper[4737]: I0126 20:20:55.012462 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" path="/var/lib/kubelet/pods/1cb89f00-39f0-460c-80c5-1fb80378c523/volumes" Jan 26 20:21:30 crc kubenswrapper[4737]: I0126 20:21:30.949282 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:21:30 crc kubenswrapper[4737]: I0126 20:21:30.950080 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:21:39 crc kubenswrapper[4737]: I0126 20:21:39.067963 4737 scope.go:117] "RemoveContainer" containerID="e095bcae9c74ec47800ec7d6799d45eea2a30e5ccf3ed83dce8e8e7646f97f6b" Jan 26 20:22:00 crc kubenswrapper[4737]: I0126 20:22:00.948971 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:22:00 crc kubenswrapper[4737]: I0126 20:22:00.949859 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:22:21 crc kubenswrapper[4737]: I0126 20:22:21.245989 4737 generic.go:334] "Generic (PLEG): container finished" podID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerID="4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc" exitCode=0 Jan 26 20:22:21 crc kubenswrapper[4737]: I0126 20:22:21.246864 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfgnj/must-gather-l29l6" event={"ID":"2b1ad284-3db2-439a-ab78-6265ba868f9d","Type":"ContainerDied","Data":"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc"} 
Jan 26 20:22:21 crc kubenswrapper[4737]: I0126 20:22:21.248663 4737 scope.go:117] "RemoveContainer" containerID="4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc" Jan 26 20:22:21 crc kubenswrapper[4737]: I0126 20:22:21.735794 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfgnj_must-gather-l29l6_2b1ad284-3db2-439a-ab78-6265ba868f9d/gather/0.log" Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.778525 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfgnj/must-gather-l29l6"] Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.779883 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hfgnj/must-gather-l29l6" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="copy" containerID="cri-o://2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6" gracePeriod=2 Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.812032 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfgnj/must-gather-l29l6"] Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.950190 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.950280 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.950354 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.952621 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:22:30 crc kubenswrapper[4737]: I0126 20:22:30.952803 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" gracePeriod=600 Jan 26 20:22:31 crc kubenswrapper[4737]: E0126 20:22:31.133025 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.373920 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfgnj_must-gather-l29l6_2b1ad284-3db2-439a-ab78-6265ba868f9d/copy/0.log" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.375871 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.377536 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfgnj_must-gather-l29l6_2b1ad284-3db2-439a-ab78-6265ba868f9d/copy/0.log" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.378015 4737 generic.go:334] "Generic (PLEG): container finished" podID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerID="2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6" exitCode=143 Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.378261 4737 scope.go:117] "RemoveContainer" containerID="2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.381561 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" exitCode=0 Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.381648 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546"} Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.393799 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:22:31 crc kubenswrapper[4737]: E0126 20:22:31.395042 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" 
Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.440672 4737 scope.go:117] "RemoveContainer" containerID="4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.537970 4737 scope.go:117] "RemoveContainer" containerID="2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6" Jan 26 20:22:31 crc kubenswrapper[4737]: E0126 20:22:31.538529 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6\": container with ID starting with 2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6 not found: ID does not exist" containerID="2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.538564 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6"} err="failed to get container status \"2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6\": rpc error: code = NotFound desc = could not find container \"2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6\": container with ID starting with 2e1acdfbd93e12646a5e8bf9b5e2ad7ad2dce6b06c420a78799fd264417773b6 not found: ID does not exist" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.538589 4737 scope.go:117] "RemoveContainer" containerID="4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc" Jan 26 20:22:31 crc kubenswrapper[4737]: E0126 20:22:31.538780 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc\": container with ID starting with 4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc not found: ID does not exist" 
containerID="4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.538804 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc"} err="failed to get container status \"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc\": rpc error: code = NotFound desc = could not find container \"4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc\": container with ID starting with 4ffe0cfcfbcfc7c0caf8655366e27e67260a2f67268c16591279a532d08fefdc not found: ID does not exist" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.538817 4737 scope.go:117] "RemoveContainer" containerID="747bf40dd18257932204627171230519436ccf208e0d90e1e79a45e89e20948b" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.570720 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output\") pod \"2b1ad284-3db2-439a-ab78-6265ba868f9d\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.571227 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvrj6\" (UniqueName: \"kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6\") pod \"2b1ad284-3db2-439a-ab78-6265ba868f9d\" (UID: \"2b1ad284-3db2-439a-ab78-6265ba868f9d\") " Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.581442 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6" (OuterVolumeSpecName: "kube-api-access-bvrj6") pod "2b1ad284-3db2-439a-ab78-6265ba868f9d" (UID: "2b1ad284-3db2-439a-ab78-6265ba868f9d"). InnerVolumeSpecName "kube-api-access-bvrj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.674693 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvrj6\" (UniqueName: \"kubernetes.io/projected/2b1ad284-3db2-439a-ab78-6265ba868f9d-kube-api-access-bvrj6\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.783112 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2b1ad284-3db2-439a-ab78-6265ba868f9d" (UID: "2b1ad284-3db2-439a-ab78-6265ba868f9d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:22:31 crc kubenswrapper[4737]: I0126 20:22:31.881805 4737 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2b1ad284-3db2-439a-ab78-6265ba868f9d-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:32 crc kubenswrapper[4737]: I0126 20:22:32.393479 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfgnj/must-gather-l29l6" Jan 26 20:22:33 crc kubenswrapper[4737]: I0126 20:22:33.002367 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" path="/var/lib/kubelet/pods/2b1ad284-3db2-439a-ab78-6265ba868f9d/volumes" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.404014 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:38 crc kubenswrapper[4737]: E0126 20:22:38.405415 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="copy" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405432 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="copy" Jan 26 20:22:38 crc kubenswrapper[4737]: E0126 20:22:38.405447 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="gather" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405454 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="gather" Jan 26 20:22:38 crc kubenswrapper[4737]: E0126 20:22:38.405467 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="extract-content" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405474 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="extract-content" Jan 26 20:22:38 crc kubenswrapper[4737]: E0126 20:22:38.405508 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="registry-server" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405517 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="registry-server" Jan 26 20:22:38 crc kubenswrapper[4737]: E0126 20:22:38.405528 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="extract-utilities" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405535 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="extract-utilities" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405773 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="copy" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405793 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb89f00-39f0-460c-80c5-1fb80378c523" containerName="registry-server" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.405812 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1ad284-3db2-439a-ab78-6265ba868f9d" containerName="gather" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.407699 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.451226 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.487058 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.487404 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ff6\" (UniqueName: \"kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.487677 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.590535 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.590729 4737 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.590888 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ff6\" (UniqueName: \"kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.591794 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.593719 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.631856 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ff6\" (UniqueName: \"kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6\") pod \"community-operators-jbggs\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:38 crc kubenswrapper[4737]: I0126 20:22:38.743386 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:39 crc kubenswrapper[4737]: I0126 20:22:39.636524 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:40 crc kubenswrapper[4737]: I0126 20:22:40.512499 4737 generic.go:334] "Generic (PLEG): container finished" podID="0b15650c-39e4-4520-971a-a45da9d7821b" containerID="69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68" exitCode=0 Jan 26 20:22:40 crc kubenswrapper[4737]: I0126 20:22:40.512814 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerDied","Data":"69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68"} Jan 26 20:22:40 crc kubenswrapper[4737]: I0126 20:22:40.512847 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerStarted","Data":"2f777b9bfeed052facfd12d788930e9b8fd8f579ebeb3f68af54acd764c0ce1d"} Jan 26 20:22:41 crc kubenswrapper[4737]: I0126 20:22:41.532251 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerStarted","Data":"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f"} Jan 26 20:22:42 crc kubenswrapper[4737]: E0126 20:22:42.317528 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b15650c_39e4_4520_971a_a45da9d7821b.slice/crio-conmon-3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f.scope\": RecentStats: unable to find data in memory cache]" Jan 26 20:22:42 crc kubenswrapper[4737]: I0126 20:22:42.547205 4737 generic.go:334] "Generic (PLEG): 
container finished" podID="0b15650c-39e4-4520-971a-a45da9d7821b" containerID="3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f" exitCode=0 Jan 26 20:22:42 crc kubenswrapper[4737]: I0126 20:22:42.547262 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerDied","Data":"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f"} Jan 26 20:22:43 crc kubenswrapper[4737]: I0126 20:22:43.562757 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerStarted","Data":"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f"} Jan 26 20:22:43 crc kubenswrapper[4737]: I0126 20:22:43.614777 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jbggs" podStartSLOduration=3.1602132960000002 podStartE2EDuration="5.614749228s" podCreationTimestamp="2026-01-26 20:22:38 +0000 UTC" firstStartedPulling="2026-01-26 20:22:40.515489682 +0000 UTC m=+6733.823684390" lastFinishedPulling="2026-01-26 20:22:42.970025614 +0000 UTC m=+6736.278220322" observedRunningTime="2026-01-26 20:22:43.592823261 +0000 UTC m=+6736.901017969" watchObservedRunningTime="2026-01-26 20:22:43.614749228 +0000 UTC m=+6736.922943946" Jan 26 20:22:43 crc kubenswrapper[4737]: I0126 20:22:43.982906 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:22:43 crc kubenswrapper[4737]: E0126 20:22:43.983232 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:22:48 crc kubenswrapper[4737]: I0126 20:22:48.744200 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:48 crc kubenswrapper[4737]: I0126 20:22:48.744819 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:48 crc kubenswrapper[4737]: I0126 20:22:48.804777 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:49 crc kubenswrapper[4737]: I0126 20:22:49.703603 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:49 crc kubenswrapper[4737]: I0126 20:22:49.953586 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:51 crc kubenswrapper[4737]: I0126 20:22:51.645818 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jbggs" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="registry-server" containerID="cri-o://70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f" gracePeriod=2 Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.250979 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.330370 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content\") pod \"0b15650c-39e4-4520-971a-a45da9d7821b\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.330487 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ff6\" (UniqueName: \"kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6\") pod \"0b15650c-39e4-4520-971a-a45da9d7821b\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.330521 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities\") pod \"0b15650c-39e4-4520-971a-a45da9d7821b\" (UID: \"0b15650c-39e4-4520-971a-a45da9d7821b\") " Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.331875 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities" (OuterVolumeSpecName: "utilities") pod "0b15650c-39e4-4520-971a-a45da9d7821b" (UID: "0b15650c-39e4-4520-971a-a45da9d7821b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.355482 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6" (OuterVolumeSpecName: "kube-api-access-m6ff6") pod "0b15650c-39e4-4520-971a-a45da9d7821b" (UID: "0b15650c-39e4-4520-971a-a45da9d7821b"). InnerVolumeSpecName "kube-api-access-m6ff6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.405390 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b15650c-39e4-4520-971a-a45da9d7821b" (UID: "0b15650c-39e4-4520-971a-a45da9d7821b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.432485 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.432517 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6ff6\" (UniqueName: \"kubernetes.io/projected/0b15650c-39e4-4520-971a-a45da9d7821b-kube-api-access-m6ff6\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.432528 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b15650c-39e4-4520-971a-a45da9d7821b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.661608 4737 generic.go:334] "Generic (PLEG): container finished" podID="0b15650c-39e4-4520-971a-a45da9d7821b" containerID="70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f" exitCode=0 Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.661668 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerDied","Data":"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f"} Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.661711 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-jbggs" event={"ID":"0b15650c-39e4-4520-971a-a45da9d7821b","Type":"ContainerDied","Data":"2f777b9bfeed052facfd12d788930e9b8fd8f579ebeb3f68af54acd764c0ce1d"} Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.661741 4737 scope.go:117] "RemoveContainer" containerID="70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.664363 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbggs" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.718558 4737 scope.go:117] "RemoveContainer" containerID="3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.749401 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.755285 4737 scope.go:117] "RemoveContainer" containerID="69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.768192 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jbggs"] Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.804189 4737 scope.go:117] "RemoveContainer" containerID="70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f" Jan 26 20:22:52 crc kubenswrapper[4737]: E0126 20:22:52.804709 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f\": container with ID starting with 70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f not found: ID does not exist" containerID="70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 
20:22:52.804754 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f"} err="failed to get container status \"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f\": rpc error: code = NotFound desc = could not find container \"70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f\": container with ID starting with 70a2aca7bb7951daa320a168b602a39d3375128af5ab3be8e014c5a166d9798f not found: ID does not exist" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.804782 4737 scope.go:117] "RemoveContainer" containerID="3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f" Jan 26 20:22:52 crc kubenswrapper[4737]: E0126 20:22:52.805166 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f\": container with ID starting with 3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f not found: ID does not exist" containerID="3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.805192 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f"} err="failed to get container status \"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f\": rpc error: code = NotFound desc = could not find container \"3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f\": container with ID starting with 3f8f19dc0898c3ba3676d10af639e81d13ec9d332772fffd76d9c92c080bd40f not found: ID does not exist" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.805206 4737 scope.go:117] "RemoveContainer" containerID="69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68" Jan 26 20:22:52 crc 
kubenswrapper[4737]: E0126 20:22:52.805621 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68\": container with ID starting with 69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68 not found: ID does not exist" containerID="69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.805647 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68"} err="failed to get container status \"69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68\": rpc error: code = NotFound desc = could not find container \"69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68\": container with ID starting with 69c4d8fe6bc54049b2d87c0205170ae425dc3e08281dc2622ea1efce50204a68 not found: ID does not exist" Jan 26 20:22:52 crc kubenswrapper[4737]: I0126 20:22:52.998902 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" path="/var/lib/kubelet/pods/0b15650c-39e4-4520-971a-a45da9d7821b/volumes" Jan 26 20:22:55 crc kubenswrapper[4737]: I0126 20:22:55.982933 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:22:55 crc kubenswrapper[4737]: E0126 20:22:55.984149 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:23:09 crc 
kubenswrapper[4737]: I0126 20:23:09.981516 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:23:09 crc kubenswrapper[4737]: E0126 20:23:09.982470 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:23:13 crc kubenswrapper[4737]: E0126 20:23:13.072962 4737 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 20:23:20 crc kubenswrapper[4737]: I0126 20:23:20.984443 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:23:20 crc kubenswrapper[4737]: E0126 20:23:20.985552 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:23:34 crc kubenswrapper[4737]: I0126 20:23:34.983741 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:23:34 crc kubenswrapper[4737]: E0126 20:23:34.985220 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:23:46 crc kubenswrapper[4737]: I0126 20:23:46.994623 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:23:46 crc kubenswrapper[4737]: E0126 20:23:46.996369 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:23:58 crc kubenswrapper[4737]: I0126 20:23:58.982205 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:23:58 crc kubenswrapper[4737]: E0126 20:23:58.983108 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:24:12 crc kubenswrapper[4737]: I0126 20:24:12.989333 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:24:12 crc kubenswrapper[4737]: E0126 20:24:12.999901 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:24:25 crc kubenswrapper[4737]: I0126 20:24:25.983177 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:24:25 crc kubenswrapper[4737]: E0126 20:24:25.984565 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:24:38 crc kubenswrapper[4737]: I0126 20:24:38.984025 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:24:38 crc kubenswrapper[4737]: E0126 20:24:38.985415 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:24:53 crc kubenswrapper[4737]: I0126 20:24:53.982198 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:24:53 crc kubenswrapper[4737]: E0126 20:24:53.983329 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:25:08 crc kubenswrapper[4737]: I0126 20:25:08.983449 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:25:08 crc kubenswrapper[4737]: E0126 20:25:08.987309 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:25:21 crc kubenswrapper[4737]: I0126 20:25:21.983094 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:25:21 crc kubenswrapper[4737]: E0126 20:25:21.984100 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:25:33 crc kubenswrapper[4737]: I0126 20:25:33.982795 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:25:33 crc kubenswrapper[4737]: E0126 20:25:33.984341 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.300697 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jsjh2/must-gather-nfwrb"] Jan 26 20:25:37 crc kubenswrapper[4737]: E0126 20:25:37.303772 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="extract-utilities" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.303875 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="extract-utilities" Jan 26 20:25:37 crc kubenswrapper[4737]: E0126 20:25:37.303970 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="extract-content" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.304024 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="extract-content" Jan 26 20:25:37 crc kubenswrapper[4737]: E0126 20:25:37.304100 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="registry-server" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.304157 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="registry-server" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.304455 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b15650c-39e4-4520-971a-a45da9d7821b" containerName="registry-server" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.306367 4737 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.314480 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jsjh2"/"openshift-service-ca.crt" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.321135 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jsjh2"/"kube-root-ca.crt" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.343397 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jsjh2/must-gather-nfwrb"] Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.398787 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.400411 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfr2g\" (UniqueName: \"kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.503425 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.503491 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dfr2g\" (UniqueName: \"kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.504542 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.527336 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfr2g\" (UniqueName: \"kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g\") pod \"must-gather-nfwrb\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:37 crc kubenswrapper[4737]: I0126 20:25:37.630097 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:25:38 crc kubenswrapper[4737]: I0126 20:25:38.197319 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jsjh2/must-gather-nfwrb"] Jan 26 20:25:39 crc kubenswrapper[4737]: I0126 20:25:39.105445 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" event={"ID":"f76904f4-fa51-456c-8c9a-654f31187e4b","Type":"ContainerStarted","Data":"039fc550ae34919d299e6107c354b530aa0400512a8b3d05acd095598a6004ca"} Jan 26 20:25:39 crc kubenswrapper[4737]: I0126 20:25:39.106251 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" event={"ID":"f76904f4-fa51-456c-8c9a-654f31187e4b","Type":"ContainerStarted","Data":"79640e9765b040448623723ed759a625b46a3c95d6c5a790058b7aaa17ba5d48"} Jan 26 20:25:39 crc kubenswrapper[4737]: I0126 20:25:39.106267 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" event={"ID":"f76904f4-fa51-456c-8c9a-654f31187e4b","Type":"ContainerStarted","Data":"368ccec4d7552e735b4db7b745a365fc6a494ca325c769758740de050f802cb8"} Jan 26 20:25:39 crc kubenswrapper[4737]: I0126 20:25:39.128989 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" podStartSLOduration=2.128962277 podStartE2EDuration="2.128962277s" podCreationTimestamp="2026-01-26 20:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:25:39.119570178 +0000 UTC m=+6912.427764926" watchObservedRunningTime="2026-01-26 20:25:39.128962277 +0000 UTC m=+6912.437156995" Jan 26 20:25:41 crc kubenswrapper[4737]: E0126 20:25:41.801601 4737 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.236:51022->38.102.83.236:42217: read tcp 
38.102.83.236:51022->38.102.83.236:42217: read: connection reset by peer Jan 26 20:25:42 crc kubenswrapper[4737]: I0126 20:25:42.996352 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-vtf8q"] Jan 26 20:25:42 crc kubenswrapper[4737]: I0126 20:25:42.998604 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.001757 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jsjh2"/"default-dockercfg-hkxqc" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.198889 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.199562 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjpf\" (UniqueName: \"kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.302612 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.302742 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnjpf\" (UniqueName: 
\"kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.303148 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.320748 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnjpf\" (UniqueName: \"kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf\") pod \"crc-debug-vtf8q\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: I0126 20:25:43.321562 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:25:43 crc kubenswrapper[4737]: W0126 20:25:43.353904 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df46411_d4ef_48d4_8e3c_e1a9a91c1d6b.slice/crio-a73617f6e678638a2c05869befbe23b681e3e6aa256cb4cdce34a8fbdc2c1b14 WatchSource:0}: Error finding container a73617f6e678638a2c05869befbe23b681e3e6aa256cb4cdce34a8fbdc2c1b14: Status 404 returned error can't find the container with id a73617f6e678638a2c05869befbe23b681e3e6aa256cb4cdce34a8fbdc2c1b14 Jan 26 20:25:44 crc kubenswrapper[4737]: I0126 20:25:44.191592 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" event={"ID":"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b","Type":"ContainerStarted","Data":"af32d1947da6bd08a5e328c4ccb1b35193ba0cc8d414a21d6a802d2b35ec3a56"} Jan 26 20:25:44 crc kubenswrapper[4737]: I0126 20:25:44.199767 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" event={"ID":"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b","Type":"ContainerStarted","Data":"a73617f6e678638a2c05869befbe23b681e3e6aa256cb4cdce34a8fbdc2c1b14"} Jan 26 20:25:44 crc kubenswrapper[4737]: I0126 20:25:44.223668 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" podStartSLOduration=2.223647362 podStartE2EDuration="2.223647362s" podCreationTimestamp="2026-01-26 20:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:25:44.221837518 +0000 UTC m=+6917.530032226" watchObservedRunningTime="2026-01-26 20:25:44.223647362 +0000 UTC m=+6917.531842060" Jan 26 20:25:48 crc kubenswrapper[4737]: I0126 20:25:48.983110 4737 scope.go:117] "RemoveContainer" 
containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:25:48 crc kubenswrapper[4737]: E0126 20:25:48.984171 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:26:01 crc kubenswrapper[4737]: I0126 20:26:01.981848 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:26:01 crc kubenswrapper[4737]: E0126 20:26:01.983056 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:26:16 crc kubenswrapper[4737]: I0126 20:26:16.995779 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:26:16 crc kubenswrapper[4737]: E0126 20:26:16.996633 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:26:29 crc kubenswrapper[4737]: I0126 20:26:29.982908 4737 scope.go:117] 
"RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:26:29 crc kubenswrapper[4737]: E0126 20:26:29.993391 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:26:32 crc kubenswrapper[4737]: I0126 20:26:32.740740 4737 generic.go:334] "Generic (PLEG): container finished" podID="3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" containerID="af32d1947da6bd08a5e328c4ccb1b35193ba0cc8d414a21d6a802d2b35ec3a56" exitCode=0 Jan 26 20:26:32 crc kubenswrapper[4737]: I0126 20:26:32.740852 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" event={"ID":"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b","Type":"ContainerDied","Data":"af32d1947da6bd08a5e328c4ccb1b35193ba0cc8d414a21d6a802d2b35ec3a56"} Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.908088 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.960374 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-vtf8q"] Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.966517 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjpf\" (UniqueName: \"kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf\") pod \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.966852 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host\") pod \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\" (UID: \"3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b\") " Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.966971 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host" (OuterVolumeSpecName: "host") pod "3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" (UID: "3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.967682 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.973899 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-vtf8q"] Jan 26 20:26:33 crc kubenswrapper[4737]: I0126 20:26:33.982813 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf" (OuterVolumeSpecName: "kube-api-access-tnjpf") pod "3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" (UID: "3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b"). InnerVolumeSpecName "kube-api-access-tnjpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:26:34 crc kubenswrapper[4737]: I0126 20:26:34.070725 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnjpf\" (UniqueName: \"kubernetes.io/projected/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b-kube-api-access-tnjpf\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:34 crc kubenswrapper[4737]: I0126 20:26:34.768401 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a73617f6e678638a2c05869befbe23b681e3e6aa256cb4cdce34a8fbdc2c1b14" Jan 26 20:26:34 crc kubenswrapper[4737]: I0126 20:26:34.768544 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-vtf8q" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.001553 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" path="/var/lib/kubelet/pods/3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b/volumes" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.200699 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-8mbct"] Jan 26 20:26:35 crc kubenswrapper[4737]: E0126 20:26:35.201475 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" containerName="container-00" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.201494 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" containerName="container-00" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.201724 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df46411-d4ef-48d4-8e3c-e1a9a91c1d6b" containerName="container-00" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.202733 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.205506 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jsjh2"/"default-dockercfg-hkxqc" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.303929 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.304132 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnwh\" (UniqueName: \"kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.409464 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.409702 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnwh\" (UniqueName: \"kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.409832 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.431704 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnwh\" (UniqueName: \"kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh\") pod \"crc-debug-8mbct\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.520613 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:35 crc kubenswrapper[4737]: W0126 20:26:35.566926 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa0a1343_005e_42f0_88c6_c7301b9550c1.slice/crio-0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271 WatchSource:0}: Error finding container 0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271: Status 404 returned error can't find the container with id 0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271 Jan 26 20:26:35 crc kubenswrapper[4737]: I0126 20:26:35.782896 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" event={"ID":"fa0a1343-005e-42f0-88c6-c7301b9550c1","Type":"ContainerStarted","Data":"0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271"} Jan 26 20:26:36 crc kubenswrapper[4737]: I0126 20:26:36.796757 4737 generic.go:334] "Generic (PLEG): container finished" podID="fa0a1343-005e-42f0-88c6-c7301b9550c1" containerID="653d586082debae7fc7d0e6915090401d6a33e24e072fea94e826fe3154a93f1" exitCode=0 Jan 26 20:26:36 crc kubenswrapper[4737]: I0126 20:26:36.796829 4737 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" event={"ID":"fa0a1343-005e-42f0-88c6-c7301b9550c1","Type":"ContainerDied","Data":"653d586082debae7fc7d0e6915090401d6a33e24e072fea94e826fe3154a93f1"} Jan 26 20:26:37 crc kubenswrapper[4737]: I0126 20:26:37.952315 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:37 crc kubenswrapper[4737]: I0126 20:26:37.993719 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cnwh\" (UniqueName: \"kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh\") pod \"fa0a1343-005e-42f0-88c6-c7301b9550c1\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " Jan 26 20:26:37 crc kubenswrapper[4737]: I0126 20:26:37.994277 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host\") pod \"fa0a1343-005e-42f0-88c6-c7301b9550c1\" (UID: \"fa0a1343-005e-42f0-88c6-c7301b9550c1\") " Jan 26 20:26:37 crc kubenswrapper[4737]: I0126 20:26:37.995061 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host" (OuterVolumeSpecName: "host") pod "fa0a1343-005e-42f0-88c6-c7301b9550c1" (UID: "fa0a1343-005e-42f0-88c6-c7301b9550c1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.001307 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh" (OuterVolumeSpecName: "kube-api-access-9cnwh") pod "fa0a1343-005e-42f0-88c6-c7301b9550c1" (UID: "fa0a1343-005e-42f0-88c6-c7301b9550c1"). InnerVolumeSpecName "kube-api-access-9cnwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.099537 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa0a1343-005e-42f0-88c6-c7301b9550c1-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.099598 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cnwh\" (UniqueName: \"kubernetes.io/projected/fa0a1343-005e-42f0-88c6-c7301b9550c1-kube-api-access-9cnwh\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.832169 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" event={"ID":"fa0a1343-005e-42f0-88c6-c7301b9550c1","Type":"ContainerDied","Data":"0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271"} Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.832223 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b3275536790e3cc6c3e7ea9f8d98fca680e70a1879a19e93b44eda39803e271" Jan 26 20:26:38 crc kubenswrapper[4737]: I0126 20:26:38.832275 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-8mbct" Jan 26 20:26:39 crc kubenswrapper[4737]: I0126 20:26:39.197971 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-8mbct"] Jan 26 20:26:39 crc kubenswrapper[4737]: I0126 20:26:39.225723 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-8mbct"] Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.498047 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-592k2"] Jan 26 20:26:40 crc kubenswrapper[4737]: E0126 20:26:40.498545 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0a1343-005e-42f0-88c6-c7301b9550c1" containerName="container-00" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.498558 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0a1343-005e-42f0-88c6-c7301b9550c1" containerName="container-00" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.498802 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0a1343-005e-42f0-88c6-c7301b9550c1" containerName="container-00" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.499608 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.502186 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jsjh2"/"default-dockercfg-hkxqc" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.567836 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrh4m\" (UniqueName: \"kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.568480 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.670845 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.670915 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrh4m\" (UniqueName: \"kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.671036 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.701804 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrh4m\" (UniqueName: \"kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m\") pod \"crc-debug-592k2\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.822000 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:40 crc kubenswrapper[4737]: W0126 20:26:40.866082 4737 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5c34ee9_bc2a_4256_b452_fa7cae0b0bcd.slice/crio-dace51360fc75ae0b207db7cc80a2cfff758b8077a9ce5e7f6a767a8dab55ea8 WatchSource:0}: Error finding container dace51360fc75ae0b207db7cc80a2cfff758b8077a9ce5e7f6a767a8dab55ea8: Status 404 returned error can't find the container with id dace51360fc75ae0b207db7cc80a2cfff758b8077a9ce5e7f6a767a8dab55ea8 Jan 26 20:26:40 crc kubenswrapper[4737]: I0126 20:26:40.996729 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0a1343-005e-42f0-88c6-c7301b9550c1" path="/var/lib/kubelet/pods/fa0a1343-005e-42f0-88c6-c7301b9550c1/volumes" Jan 26 20:26:41 crc kubenswrapper[4737]: I0126 20:26:41.880621 4737 generic.go:334] "Generic (PLEG): container finished" podID="a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" containerID="659b3969c1e561d547c359d729f5f38ade6a954591cfbf7de1b8177b282c82ce" exitCode=0 Jan 26 20:26:41 crc kubenswrapper[4737]: I0126 20:26:41.880820 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-592k2" 
event={"ID":"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd","Type":"ContainerDied","Data":"659b3969c1e561d547c359d729f5f38ade6a954591cfbf7de1b8177b282c82ce"} Jan 26 20:26:41 crc kubenswrapper[4737]: I0126 20:26:41.881042 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/crc-debug-592k2" event={"ID":"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd","Type":"ContainerStarted","Data":"dace51360fc75ae0b207db7cc80a2cfff758b8077a9ce5e7f6a767a8dab55ea8"} Jan 26 20:26:41 crc kubenswrapper[4737]: I0126 20:26:41.942467 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-592k2"] Jan 26 20:26:41 crc kubenswrapper[4737]: I0126 20:26:41.956008 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jsjh2/crc-debug-592k2"] Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.024580 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.035687 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrh4m\" (UniqueName: \"kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m\") pod \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.035743 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host\") pod \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\" (UID: \"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd\") " Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.035907 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host" (OuterVolumeSpecName: "host") pod "a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" (UID: 
"a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.036742 4737 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.043705 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m" (OuterVolumeSpecName: "kube-api-access-rrh4m") pod "a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" (UID: "a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd"). InnerVolumeSpecName "kube-api-access-rrh4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.140000 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrh4m\" (UniqueName: \"kubernetes.io/projected/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd-kube-api-access-rrh4m\") on node \"crc\" DevicePath \"\"" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.903653 4737 scope.go:117] "RemoveContainer" containerID="659b3969c1e561d547c359d729f5f38ade6a954591cfbf7de1b8177b282c82ce" Jan 26 20:26:43 crc kubenswrapper[4737]: I0126 20:26:43.903744 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/crc-debug-592k2" Jan 26 20:26:44 crc kubenswrapper[4737]: I0126 20:26:44.982653 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:26:44 crc kubenswrapper[4737]: E0126 20:26:44.983520 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:26:44 crc kubenswrapper[4737]: I0126 20:26:44.996330 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" path="/var/lib/kubelet/pods/a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd/volumes" Jan 26 20:26:59 crc kubenswrapper[4737]: I0126 20:26:59.982794 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:26:59 crc kubenswrapper[4737]: E0126 20:26:59.983781 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:27:11 crc kubenswrapper[4737]: I0126 20:27:11.982638 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:27:11 crc kubenswrapper[4737]: E0126 20:27:11.983453 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:27:25 crc kubenswrapper[4737]: I0126 20:27:25.982336 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:27:25 crc kubenswrapper[4737]: E0126 20:27:25.983398 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.428243 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-api/0.log" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.679599 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-evaluator/0.log" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.688757 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-listener/0.log" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.711290 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_147666d0-b0ae-46ad-aaa0-2fcf6db0f137/aodh-notifier/0.log" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.927331 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-687b47d654-rb2ft_1aef338e-174a-4bc2-acd1-56374a72e519/barbican-api/0.log" Jan 26 20:27:35 crc kubenswrapper[4737]: I0126 20:27:35.962242 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-687b47d654-rb2ft_1aef338e-174a-4bc2-acd1-56374a72e519/barbican-api-log/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.193463 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c5b6c8cdb-gwc7x_b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b/barbican-keystone-listener/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.322962 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c5b6c8cdb-gwc7x_b82b3dcd-dcf3-44a0-bfc7-cb8d484ebd6b/barbican-keystone-listener-log/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.376241 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-866f479b9-7wv96_b84a5366-14c9-4b93-b185-18a4e3695ed7/barbican-worker/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.486462 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-866f479b9-7wv96_b84a5366-14c9-4b93-b185-18a4e3695ed7/barbican-worker-log/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.672408 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kpbfj_6d1d0ed3-31b7-41a2-8f49-741d206509bd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.959723 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/ceilometer-central-agent/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.964956 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/ceilometer-notification-agent/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.967544 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/proxy-httpd/0.log" Jan 26 20:27:36 crc kubenswrapper[4737]: I0126 20:27:36.979579 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_43f4c1d0-e222-4099-ad1a-73d3c9d9530a/sg-core/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.201902 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_715806cf-cb82-4224-bdb0-8aed20e29b49/cinder-api-log/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.342680 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_715806cf-cb82-4224-bdb0-8aed20e29b49/cinder-api/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.563800 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_635e921c-e7e7-4721-a152-f589e21e4631/cinder-scheduler/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.619492 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_635e921c-e7e7-4721-a152-f589e21e4631/probe/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.780918 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-m2vxk_f606c12b-460a-4ec1-ac57-d4e5667945de/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:37 crc kubenswrapper[4737]: I0126 20:27:37.982415 4737 scope.go:117] "RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:27:38 crc kubenswrapper[4737]: I0126 20:27:38.580242 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/init/0.log" Jan 26 20:27:38 crc kubenswrapper[4737]: I0126 20:27:38.580539 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-trclj_e5cc8a39-bca0-4175-a418-a24c75e5bc06/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:38 crc kubenswrapper[4737]: I0126 20:27:38.664780 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd"} Jan 26 20:27:39 crc kubenswrapper[4737]: I0126 20:27:39.065533 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/init/0.log" Jan 26 20:27:39 crc kubenswrapper[4737]: I0126 20:27:39.100556 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f67kv_50a8451d-1c9f-4e7b-a24a-36a22672f896/dnsmasq-dns/0.log" Jan 26 20:27:39 crc kubenswrapper[4737]: I0126 20:27:39.294346 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bd28j_5e950231-d00c-4fbd-b9de-a93d2d86eb36/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:39 crc kubenswrapper[4737]: I0126 20:27:39.558118 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5de2e392-7605-4b8c-831c-4101c098fc0e/glance-httpd/0.log" Jan 26 20:27:39 crc kubenswrapper[4737]: I0126 20:27:39.598106 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5de2e392-7605-4b8c-831c-4101c098fc0e/glance-log/0.log" Jan 26 20:27:40 crc kubenswrapper[4737]: I0126 20:27:40.007429 4737 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c0fd189-4592-4f52-a100-e6fc3581ef26/glance-httpd/0.log" Jan 26 20:27:40 crc kubenswrapper[4737]: I0126 20:27:40.060939 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9c0fd189-4592-4f52-a100-e6fc3581ef26/glance-log/0.log" Jan 26 20:27:40 crc kubenswrapper[4737]: I0126 20:27:40.994653 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mzz54_fa425b93-9221-4f0b-b0fd-7995e092f8f1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.061484 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-858867c5df-ppbxf_de816c7c-1d5a-4226-b17c-b4f5a5c8d07b/heat-engine/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.305244 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lj8qk_8f08d498-ef07-4e31-ab34-d68972740f02/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.379424 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-cfff6bbff-s577r_f4b0bd32-90db-4eae-a748-903c5d5cd931/heat-api/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.523499 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6b78c96546-lpdfk_bbb9e95d-409d-4b81-a1e4-1dca34c9d1cb/heat-cfnapi/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.669501 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490901-rhxwf_37efbad2-f8c2-4830-9ece-86870bf29923/keystone-cron/0.log" Jan 26 20:27:41 crc kubenswrapper[4737]: I0126 20:27:41.885529 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29490961-hfz45_5f36a330-35fc-46b8-9f3f-4648e4e5485c/keystone-cron/0.log" Jan 26 20:27:42 crc kubenswrapper[4737]: I0126 20:27:42.020348 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-86b84744f8-59mdj_682c692a-8447-4b49-b81d-98b7fa9ccec1/keystone-api/0.log" Jan 26 20:27:42 crc kubenswrapper[4737]: I0126 20:27:42.436944 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c57d0600-f0a4-43d2-b974-ced2346aae55/kube-state-metrics/0.log" Jan 26 20:27:42 crc kubenswrapper[4737]: I0126 20:27:42.633383 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-p6bgr_9f1823e5-fd64-4ddd-a4ed-5727de977754/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:42 crc kubenswrapper[4737]: I0126 20:27:42.661936 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wlzfp_35694d2d-33da-4cab-96a8-4e14aa07b4f9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:43 crc kubenswrapper[4737]: I0126 20:27:43.023732 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_3cc067c6-ba98-4534-a9d8-2028c6e0ccf6/mysqld-exporter/0.log" Jan 26 20:27:43 crc kubenswrapper[4737]: I0126 20:27:43.450677 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-55cbc4d4bf-89lfk_a9b9b411-9b28-486b-bb42-cf668fba2ee5/neutron-api/0.log" Jan 26 20:27:43 crc kubenswrapper[4737]: I0126 20:27:43.488127 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ms8mp_f03ef699-8fd7-4aad-a3a5-8a7306048d86/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:43 crc kubenswrapper[4737]: I0126 20:27:43.595792 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-55cbc4d4bf-89lfk_a9b9b411-9b28-486b-bb42-cf668fba2ee5/neutron-httpd/0.log" Jan 26 20:27:44 crc kubenswrapper[4737]: I0126 20:27:44.445261 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5d833a0c-e63e-4296-85f9-f7489007fa6c/nova-cell0-conductor-conductor/0.log" Jan 26 20:27:44 crc kubenswrapper[4737]: I0126 20:27:44.683955 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc6d57aa-811b-482e-abc2-5048e523ce88/nova-api-log/0.log" Jan 26 20:27:44 crc kubenswrapper[4737]: I0126 20:27:44.894143 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8c62bab3-337a-4449-ac7f-63dedc641524/nova-cell1-conductor-conductor/0.log" Jan 26 20:27:45 crc kubenswrapper[4737]: I0126 20:27:45.235811 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_32bea17c-5210-413d-81b5-e30c0dbc0c77/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 20:27:45 crc kubenswrapper[4737]: I0126 20:27:45.245643 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-m7qxj_c1f6bd41-c1ed-47f9-a3db-03756845afbc/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:45 crc kubenswrapper[4737]: I0126 20:27:45.551270 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc6d57aa-811b-482e-abc2-5048e523ce88/nova-api-api/0.log" Jan 26 20:27:45 crc kubenswrapper[4737]: I0126 20:27:45.914340 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e472c4b-c138-4b34-b972-84afd363d6dd/nova-metadata-log/0.log" Jan 26 20:27:46 crc kubenswrapper[4737]: I0126 20:27:46.298238 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/mysql-bootstrap/0.log" Jan 26 20:27:46 crc kubenswrapper[4737]: I0126 20:27:46.413522 
4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a901aed9-dbba-43e3-bf8c-f6026e3ea49d/nova-scheduler-scheduler/0.log" Jan 26 20:27:46 crc kubenswrapper[4737]: I0126 20:27:46.570474 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/mysql-bootstrap/0.log" Jan 26 20:27:46 crc kubenswrapper[4737]: I0126 20:27:46.628556 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_89018ab2-3fc5-4855-b47e-ac19d8008c8e/galera/0.log" Jan 26 20:27:46 crc kubenswrapper[4737]: I0126 20:27:46.807855 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/mysql-bootstrap/0.log" Jan 26 20:27:47 crc kubenswrapper[4737]: I0126 20:27:47.177633 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/mysql-bootstrap/0.log" Jan 26 20:27:47 crc kubenswrapper[4737]: I0126 20:27:47.256180 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca50689d-e7af-4267-9ee0-11d254c08962/galera/0.log" Jan 26 20:27:47 crc kubenswrapper[4737]: I0126 20:27:47.441653 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d857f780-d620-4d1a-bacb-8ecff74a012f/openstackclient/0.log" Jan 26 20:27:47 crc kubenswrapper[4737]: I0126 20:27:47.581615 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-96rrx_6bdafee1-1c61-4cbe-b052-c5948c27152d/openstack-network-exporter/0.log" Jan 26 20:27:47 crc kubenswrapper[4737]: I0126 20:27:47.841799 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server-init/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.145451 4737 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.148126 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovs-vswitchd/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.198507 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tnjz7_b875fe78-bf29-45f1-a4a5-f3881134a171/ovsdb-server-init/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.499505 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zrckb_11408d0f-4b45-4dab-bc9e-965ac14aed79/ovn-controller/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.712653 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9r2p8_7602eee6-3627-420f-8e44-c19689be75de/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:48 crc kubenswrapper[4737]: I0126 20:27:48.773506 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19bc14ba-dd2b-4cb9-969d-e44339856cf0/openstack-network-exporter/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.118322 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4e472c4b-c138-4b34-b972-84afd363d6dd/nova-metadata-metadata/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.348826 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_19bc14ba-dd2b-4cb9-969d-e44339856cf0/ovn-northd/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.451798 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6465a03e-5fc8-4886-b68b-531fe218230f/openstack-network-exporter/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.533702 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6465a03e-5fc8-4886-b68b-531fe218230f/ovsdbserver-nb/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.671991 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_923f982a-41f5-4c9d-a2dc-50e96e54c283/openstack-network-exporter/0.log" Jan 26 20:27:49 crc kubenswrapper[4737]: I0126 20:27:49.781989 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_923f982a-41f5-4c9d-a2dc-50e96e54c283/ovsdbserver-sb/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.115356 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c974878b4-m6rmv_faf8de27-9da1-4a0d-9edf-ebb5d53fc272/placement-api/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.215049 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c974878b4-m6rmv_faf8de27-9da1-4a0d-9edf-ebb5d53fc272/placement-log/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.248753 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/init-config-reloader/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.440485 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/init-config-reloader/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.483529 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/config-reloader/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.530098 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/prometheus/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.530880 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dd029654-7895-4949-9ef7-b5cdd6043451/thanos-sidecar/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.749986 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/setup-container/0.log" Jan 26 20:27:50 crc kubenswrapper[4737]: I0126 20:27:50.999504 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.082645 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5db87e3-e7cb-4248-bc3a-5c6f5d92c144/rabbitmq/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.090528 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.343133 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.355587 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcd52a93-f277-416b-b37b-2ae58d2edaa5/rabbitmq/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.419598 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.679193 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.697219 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/setup-container/0.log" Jan 26 20:27:51 crc kubenswrapper[4737]: I0126 20:27:51.723422 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_72e5eb94-0267-4126-b24c-9b816c66badf/rabbitmq/0.log" Jan 26 20:27:52 crc kubenswrapper[4737]: I0126 20:27:52.148666 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/setup-container/0.log" Jan 26 20:27:52 crc kubenswrapper[4737]: I0126 20:27:52.159015 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_44d4092c-abb5-4218-81dc-32ba2257004d/rabbitmq/0.log" Jan 26 20:27:52 crc kubenswrapper[4737]: I0126 20:27:52.223806 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-h27rt_34f77dce-aaea-4249-be45-fa7c47b5616b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:52 crc kubenswrapper[4737]: I0126 20:27:52.691092 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ld6hr_2af8847d-3acf-4733-a507-7d00229ef74c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:52 crc kubenswrapper[4737]: I0126 20:27:52.761406 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x2cd5_67eb47db-a20a-4f95-97c2-67df12c02360/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.132092 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4krm6_2440805a-4477-42f6-bc13-01fc157e1b94/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.163588 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-h2hhm_395dd2b5-3055-45e9-b528-9bc97b61743f/ssh-known-hosts-edpm-deployment/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.408602 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dd8ff9d59-rttts_38df0a7c-47f1-4834-b970-d815d009b6d7/proxy-server/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.664276 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2fbb8_c9be0bf2-1b3f-4f77-89ec-b5afa2362e47/swift-ring-rebalance/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.684720 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6dd8ff9d59-rttts_38df0a7c-47f1-4834-b970-d815d009b6d7/proxy-httpd/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.869146 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-auditor/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.935918 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-reaper/0.log" Jan 26 20:27:53 crc kubenswrapper[4737]: I0126 20:27:53.992495 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-replicator/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.047932 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/account-server/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.175027 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-auditor/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.227626 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-server/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.274368 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-replicator/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.364979 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/container-updater/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.554374 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-expirer/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.565587 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-auditor/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.695048 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-replicator/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.715662 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-server/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.766248 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2618c486-a631-4a87-ba8f-d5ccad83a208/memcached/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.791353 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/object-updater/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.844643 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/rsync/0.log" Jan 26 20:27:54 crc kubenswrapper[4737]: I0126 20:27:54.984744 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_03970489-bf21-4d19-afc2-bf8d39aa683e/swift-recon-cron/0.log" Jan 26 20:27:55 crc kubenswrapper[4737]: I0126 20:27:55.062244 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-v27w7_6bacdfa3-047c-42c9-a233-7daac1e8b65d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:55 crc kubenswrapper[4737]: I0126 20:27:55.149185 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-5rchb_fe3a5992-1b84-4df9-bebe-3f0060fe631d/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:55 crc kubenswrapper[4737]: I0126 20:27:55.380396 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7e035125-8d0b-4019-9266-fd7abb0057da/test-operator-logs-container/0.log" Jan 26 20:27:55 crc kubenswrapper[4737]: I0126 20:27:55.795831 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-74xdm_bb314574-7438-4911-8b54-a1ccfa5a907d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:27:55 crc kubenswrapper[4737]: I0126 20:27:55.915368 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d81cdf24-ce67-401f-869f-805f4718fce4/tempest-tests-tempest-tests-runner/0.log" Jan 26 20:28:30 crc kubenswrapper[4737]: I0126 20:28:30.919061 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 
20:28:31.168795 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.170175 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.171548 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.431954 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/pull/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.475229 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/util/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.543735 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c66b61d564fc639d515e788b700d4c5c2c3cff0a71ecd99e42f80cf9454pgp_ad64c1f6-5d9c-4ec5-990c-354f54f9f183/extract/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.772302 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-p42h8_288df3c7-1220-419c-bde6-67ee3922b8ad/manager/0.log" Jan 26 20:28:31 crc kubenswrapper[4737]: I0126 20:28:31.846465 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-hbqjs_6cc46694-b15a-4417-a0a9-f4c13184f2ca/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.009166 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-6mjbw_62ddf97f-7d75-4667-9480-17cb809b98f5/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.216822 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-bl8hk_97c0989d-f677-4460-b62b-4733c7db29d4/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.380173 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-kq82d_d80defd5-46d2-4e20-b093-dff95dca651b/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.427227 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j9nc9_3508c1f8-c9d9-41bf-b71e-eebb13eb5e86/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.712173 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jpmmh_b3a010fd-4f62-40c6-a377-be5c6f2e6ba7/manager/0.log" Jan 26 20:28:32 crc kubenswrapper[4737]: I0126 20:28:32.946924 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-9lqk4_6904aa8b-12dd-4139-9a9f-f60be010cf3b/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.068586 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-zbp84_03d41d00-eefc-45c4-aaea-f09a5e34362b/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.307223 4737 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-v9b85_0d2709bf-2113-45d7-94a1-816bc230044a/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.422837 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-c4hpz_5b2ad507-8ef0-40e5-a10c-d5ed62a8181e/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.620202 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-tz995_01b83dfe-58bb-40fa-a0e8-b942b4c79b72/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.746921 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-xrm44_3164f5a5-0f37-4ab6-bc2a-51978eb9f842/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.896931 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-qr8vf_284309e9-61a9-47c4-918a-6f097cf10aa1/manager/0.log" Jan 26 20:28:33 crc kubenswrapper[4737]: I0126 20:28:33.993822 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854wx9kv_5175d9d3-4bf9-4f52-be13-e33b02e03592/manager/0.log" Jan 26 20:28:34 crc kubenswrapper[4737]: I0126 20:28:34.421243 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-848546446f-8xbh6_de29bea2-d234-4bc2-b1fc-90a18e84ed17/operator/0.log" Jan 26 20:28:34 crc kubenswrapper[4737]: I0126 20:28:34.580834 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-n9rk2_8f103d19-388b-408e-a7e5-b17428b986c9/registry-server/0.log" Jan 26 20:28:34 crc kubenswrapper[4737]: I0126 20:28:34.870544 4737 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-55xkx_c9b745b4-487d-4ccb-a398-8d9af643ae50/manager/0.log" Jan 26 20:28:35 crc kubenswrapper[4737]: I0126 20:28:35.082671 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-lfh5n_11c8ec8e-f710-4b3f-9bf2-be1834ddffb9/manager/0.log" Jan 26 20:28:35 crc kubenswrapper[4737]: I0126 20:28:35.236414 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5xvj4_3c491fdc-889c-4d4a-aedd-60a242e26027/operator/0.log" Jan 26 20:28:35 crc kubenswrapper[4737]: I0126 20:28:35.602879 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9lkfc_8aa44595-2352-4a3e-888f-3409254cde36/manager/0.log" Jan 26 20:28:35 crc kubenswrapper[4737]: I0126 20:28:35.729634 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6ffbd5d47c-xwdkt_c7cfbb47-6d43-4030-a3d1-516430aeffb7/manager/0.log" Jan 26 20:28:35 crc kubenswrapper[4737]: I0126 20:28:35.844462 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-4n95b_c68a8293-a298-4384-83f0-4a7e50517d3b/manager/0.log" Jan 26 20:28:36 crc kubenswrapper[4737]: I0126 20:28:36.039561 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cf49855b4-zfzgj_0716cfbf-95d3-44fd-9e28-9b861568b791/manager/0.log" Jan 26 20:28:36 crc kubenswrapper[4737]: I0126 20:28:36.091102 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-hx2gj_148ce19e-3a70-4b27-98e1-87807dee6178/manager/0.log" Jan 26 20:28:59 crc kubenswrapper[4737]: I0126 20:28:59.494172 4737 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6f78q_cf12407d-16ca-40d9-8279-f46693aee8b1/control-plane-machine-set-operator/0.log" Jan 26 20:28:59 crc kubenswrapper[4737]: I0126 20:28:59.674221 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ktwh7_c8be3738-e6c1-4cc8-ae8a-a23387b73213/kube-rbac-proxy/0.log" Jan 26 20:28:59 crc kubenswrapper[4737]: I0126 20:28:59.769720 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ktwh7_c8be3738-e6c1-4cc8-ae8a-a23387b73213/machine-api-operator/0.log" Jan 26 20:29:14 crc kubenswrapper[4737]: I0126 20:29:14.930736 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bjjtc_780b9f7e-40b5-4b9b-94bc-0401ce35b5e3/cert-manager-controller/0.log" Jan 26 20:29:15 crc kubenswrapper[4737]: I0126 20:29:15.205717 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qschs_c42be5f9-9a91-43c2-ac4b-5c7b49bb434c/cert-manager-cainjector/0.log" Jan 26 20:29:15 crc kubenswrapper[4737]: I0126 20:29:15.246259 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-57xsl_e5a74a57-5f9a-442f-a166-7787942994c8/cert-manager-webhook/0.log" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.341810 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:23 crc kubenswrapper[4737]: E0126 20:29:23.344112 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" containerName="container-00" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.344148 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" containerName="container-00" 
Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.344378 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5c34ee9-bc2a-4256-b452-fa7cae0b0bcd" containerName="container-00" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.346101 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.383137 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.520640 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wg8\" (UniqueName: \"kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.520729 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.520795 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.622926 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.623137 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wg8\" (UniqueName: \"kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.623177 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.623716 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.623934 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.650036 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wg8\" (UniqueName: 
\"kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8\") pod \"redhat-marketplace-rh7tg\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:23 crc kubenswrapper[4737]: I0126 20:29:23.684820 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:24 crc kubenswrapper[4737]: I0126 20:29:24.350386 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:24 crc kubenswrapper[4737]: I0126 20:29:24.991746 4737 generic.go:334] "Generic (PLEG): container finished" podID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerID="a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80" exitCode=0 Jan 26 20:29:24 crc kubenswrapper[4737]: I0126 20:29:24.995515 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:29:25 crc kubenswrapper[4737]: I0126 20:29:24.999959 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerDied","Data":"a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80"} Jan 26 20:29:25 crc kubenswrapper[4737]: I0126 20:29:25.000005 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerStarted","Data":"91b4f93650e255e630965eb5a88b8bec78cfda78563185e9449a5729e37b948f"} Jan 26 20:29:27 crc kubenswrapper[4737]: I0126 20:29:27.026104 4737 generic.go:334] "Generic (PLEG): container finished" podID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerID="fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f" exitCode=0 Jan 26 20:29:27 crc kubenswrapper[4737]: I0126 20:29:27.026225 4737 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerDied","Data":"fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f"} Jan 26 20:29:28 crc kubenswrapper[4737]: I0126 20:29:28.040287 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerStarted","Data":"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf"} Jan 26 20:29:28 crc kubenswrapper[4737]: I0126 20:29:28.067040 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rh7tg" podStartSLOduration=2.4533152 podStartE2EDuration="5.067020786s" podCreationTimestamp="2026-01-26 20:29:23 +0000 UTC" firstStartedPulling="2026-01-26 20:29:24.99469176 +0000 UTC m=+7138.302886458" lastFinishedPulling="2026-01-26 20:29:27.608397296 +0000 UTC m=+7140.916592044" observedRunningTime="2026-01-26 20:29:28.057602765 +0000 UTC m=+7141.365797473" watchObservedRunningTime="2026-01-26 20:29:28.067020786 +0000 UTC m=+7141.375215494" Jan 26 20:29:30 crc kubenswrapper[4737]: I0126 20:29:30.784984 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zdxbz_4c4a0a5e-ab9e-478c-8f90-741563313097/nmstate-console-plugin/0.log" Jan 26 20:29:30 crc kubenswrapper[4737]: I0126 20:29:30.952089 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-99d4z_1a140881-5ef3-4582-9694-e24fc14a6fb4/nmstate-handler/0.log" Jan 26 20:29:31 crc kubenswrapper[4737]: I0126 20:29:31.040306 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qh796_33e00306-edd4-487d-9bc6-e49fa9692a29/kube-rbac-proxy/0.log" Jan 26 20:29:31 crc kubenswrapper[4737]: I0126 20:29:31.206451 4737 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qh796_33e00306-edd4-487d-9bc6-e49fa9692a29/nmstate-metrics/0.log" Jan 26 20:29:31 crc kubenswrapper[4737]: I0126 20:29:31.275824 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dg9v7_35a928d3-7171-42be-8005-cbdfec1891c3/nmstate-operator/0.log" Jan 26 20:29:31 crc kubenswrapper[4737]: I0126 20:29:31.412480 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-f425m_30e5ad3f-a8b0-4d6d-b128-e8b126a1fba5/nmstate-webhook/0.log" Jan 26 20:29:33 crc kubenswrapper[4737]: I0126 20:29:33.686233 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:33 crc kubenswrapper[4737]: I0126 20:29:33.686689 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:33 crc kubenswrapper[4737]: I0126 20:29:33.748295 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:34 crc kubenswrapper[4737]: I0126 20:29:34.202425 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:34 crc kubenswrapper[4737]: I0126 20:29:34.268996 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.153199 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rh7tg" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="registry-server" containerID="cri-o://c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf" gracePeriod=2 Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.734601 4737 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.884528 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities\") pod \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.884916 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74wg8\" (UniqueName: \"kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8\") pod \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.885017 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content\") pod \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\" (UID: \"acfbb1cf-569f-48bf-99d7-bde2cff9e14a\") " Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.885591 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities" (OuterVolumeSpecName: "utilities") pod "acfbb1cf-569f-48bf-99d7-bde2cff9e14a" (UID: "acfbb1cf-569f-48bf-99d7-bde2cff9e14a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.887503 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.904367 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8" (OuterVolumeSpecName: "kube-api-access-74wg8") pod "acfbb1cf-569f-48bf-99d7-bde2cff9e14a" (UID: "acfbb1cf-569f-48bf-99d7-bde2cff9e14a"). InnerVolumeSpecName "kube-api-access-74wg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.916867 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acfbb1cf-569f-48bf-99d7-bde2cff9e14a" (UID: "acfbb1cf-569f-48bf-99d7-bde2cff9e14a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.989532 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:29:36 crc kubenswrapper[4737]: I0126 20:29:36.989570 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74wg8\" (UniqueName: \"kubernetes.io/projected/acfbb1cf-569f-48bf-99d7-bde2cff9e14a-kube-api-access-74wg8\") on node \"crc\" DevicePath \"\"" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.164423 4737 generic.go:334] "Generic (PLEG): container finished" podID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerID="c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf" exitCode=0 Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.164469 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerDied","Data":"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf"} Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.164501 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rh7tg" event={"ID":"acfbb1cf-569f-48bf-99d7-bde2cff9e14a","Type":"ContainerDied","Data":"91b4f93650e255e630965eb5a88b8bec78cfda78563185e9449a5729e37b948f"} Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.164524 4737 scope.go:117] "RemoveContainer" containerID="c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.164671 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rh7tg" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.197358 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.209645 4737 scope.go:117] "RemoveContainer" containerID="fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.242780 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rh7tg"] Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.251842 4737 scope.go:117] "RemoveContainer" containerID="a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.323169 4737 scope.go:117] "RemoveContainer" containerID="c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf" Jan 26 20:29:37 crc kubenswrapper[4737]: E0126 20:29:37.323742 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf\": container with ID starting with c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf not found: ID does not exist" containerID="c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.323796 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf"} err="failed to get container status \"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf\": rpc error: code = NotFound desc = could not find container \"c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf\": container with ID starting with c47c38ee8e6cfaba34e053eccb58d3147a8f822c1945b1e4ba9738b94dfe20cf not found: 
ID does not exist" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.323824 4737 scope.go:117] "RemoveContainer" containerID="fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f" Jan 26 20:29:37 crc kubenswrapper[4737]: E0126 20:29:37.324217 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f\": container with ID starting with fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f not found: ID does not exist" containerID="fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.324252 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f"} err="failed to get container status \"fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f\": rpc error: code = NotFound desc = could not find container \"fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f\": container with ID starting with fdd2a8f6b95467de274099803a02a2b7710d8410e71d695fb766245f6ca9a21f not found: ID does not exist" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.324274 4737 scope.go:117] "RemoveContainer" containerID="a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80" Jan 26 20:29:37 crc kubenswrapper[4737]: E0126 20:29:37.324499 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80\": container with ID starting with a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80 not found: ID does not exist" containerID="a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80" Jan 26 20:29:37 crc kubenswrapper[4737]: I0126 20:29:37.324528 4737 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80"} err="failed to get container status \"a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80\": rpc error: code = NotFound desc = could not find container \"a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80\": container with ID starting with a014a1c947d99b5ce3abacbe8740da9605e5b49f046f5bd8ad5fd2c9f0008e80 not found: ID does not exist" Jan 26 20:29:38 crc kubenswrapper[4737]: I0126 20:29:38.997107 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" path="/var/lib/kubelet/pods/acfbb1cf-569f-48bf-99d7-bde2cff9e14a/volumes" Jan 26 20:29:46 crc kubenswrapper[4737]: I0126 20:29:46.338030 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/kube-rbac-proxy/0.log" Jan 26 20:29:46 crc kubenswrapper[4737]: I0126 20:29:46.506867 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/manager/0.log" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.256283 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc"] Jan 26 20:30:00 crc kubenswrapper[4737]: E0126 20:30:00.258697 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.258730 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4737]: E0126 20:30:00.258819 4737 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="extract-utilities" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.258838 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="extract-utilities" Jan 26 20:30:00 crc kubenswrapper[4737]: E0126 20:30:00.258888 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="extract-content" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.258899 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="extract-content" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.259672 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfbb1cf-569f-48bf-99d7-bde2cff9e14a" containerName="registry-server" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.261943 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.267807 4737 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.269609 4737 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.296345 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc"] Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.430140 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume\") pod \"collect-profiles-29490990-m7ftc\" 
(UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.430241 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.431316 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwr4\" (UniqueName: \"kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.534307 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwr4\" (UniqueName: \"kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.534419 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.534466 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.535679 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.543282 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.555047 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwr4\" (UniqueName: \"kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4\") pod \"collect-profiles-29490990-m7ftc\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.600520 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.949229 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:30:00 crc kubenswrapper[4737]: I0126 20:30:00.949680 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.112474 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc"] Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.548981 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" event={"ID":"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4","Type":"ContainerStarted","Data":"8c76b7b7a54aa6d063b9417c72011916944637da3bb0a65d50565a1d37e216f1"} Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.549496 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" event={"ID":"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4","Type":"ContainerStarted","Data":"9b34dabeb25407395b4508fe1871a7773bb2348fd6838080cf6387dcf1f03af8"} Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.569516 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" podStartSLOduration=1.5694958620000001 
podStartE2EDuration="1.569495862s" podCreationTimestamp="2026-01-26 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:30:01.563397683 +0000 UTC m=+7174.871592391" watchObservedRunningTime="2026-01-26 20:30:01.569495862 +0000 UTC m=+7174.877690560" Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.736207 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jvfnx_780e85db-cb8c-4a8c-920d-2594cd33eebf/prometheus-operator/0.log" Jan 26 20:30:01 crc kubenswrapper[4737]: I0126 20:30:01.958236 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_33031648-f53a-4f71-8c03-041f7f1fcbf5/prometheus-operator-admission-webhook/0.log" Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 20:30:02.097933 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_cc4df7ac-3298-4316-8c9b-1ac9827330fd/prometheus-operator-admission-webhook/0.log" Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 20:30:02.261166 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xf99z_b319754a-04cc-40dd-b031-ea72a3d19db2/operator/0.log" Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 20:30:02.317895 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ckxn2_6b80cd0d-81ac-4f45-a80c-3b1cf442fc44/observability-ui-dashboards/0.log" Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 20:30:02.598957 4737 generic.go:334] "Generic (PLEG): container finished" podID="6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" containerID="8c76b7b7a54aa6d063b9417c72011916944637da3bb0a65d50565a1d37e216f1" exitCode=0 Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 
20:30:02.599018 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" event={"ID":"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4","Type":"ContainerDied","Data":"8c76b7b7a54aa6d063b9417c72011916944637da3bb0a65d50565a1d37e216f1"} Jan 26 20:30:02 crc kubenswrapper[4737]: I0126 20:30:02.604581 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r5vwv_7478def9-da54-4632-803e-47f36b6fb64b/perses-operator/0.log" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.045561 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.089467 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwr4\" (UniqueName: \"kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4\") pod \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.089565 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume\") pod \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.089604 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume\") pod \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\" (UID: \"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4\") " Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.093222 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" (UID: "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.099636 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4" (OuterVolumeSpecName: "kube-api-access-ljwr4") pod "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" (UID: "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4"). InnerVolumeSpecName "kube-api-access-ljwr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.113343 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" (UID: "6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.197384 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwr4\" (UniqueName: \"kubernetes.io/projected/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-kube-api-access-ljwr4\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.197427 4737 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.197440 4737 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.625644 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" event={"ID":"6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4","Type":"ContainerDied","Data":"9b34dabeb25407395b4508fe1871a7773bb2348fd6838080cf6387dcf1f03af8"} Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.625703 4737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b34dabeb25407395b4508fe1871a7773bb2348fd6838080cf6387dcf1f03af8" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.625916 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-m7ftc" Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.659934 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc"] Jan 26 20:30:04 crc kubenswrapper[4737]: I0126 20:30:04.672137 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-jw2xc"] Jan 26 20:30:05 crc kubenswrapper[4737]: I0126 20:30:05.001621 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d713092-777f-413e-9356-8d5ffaa09d8a" path="/var/lib/kubelet/pods/8d713092-777f-413e-9356-8d5ffaa09d8a/volumes" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.084151 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:09 crc kubenswrapper[4737]: E0126 20:30:09.087181 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" containerName="collect-profiles" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.087271 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" containerName="collect-profiles" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.087904 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd90fe9-59a0-4fba-88f2-153cd9d1d6b4" containerName="collect-profiles" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.091578 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.120884 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.130808 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.131023 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.131461 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czr4k\" (UniqueName: \"kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.233537 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czr4k\" (UniqueName: \"kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.233738 4737 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.233794 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.234306 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.234381 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.256928 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czr4k\" (UniqueName: \"kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k\") pod \"certified-operators-lpc54\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.426134 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:09 crc kubenswrapper[4737]: I0126 20:30:09.975954 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:10 crc kubenswrapper[4737]: I0126 20:30:10.700461 4737 generic.go:334] "Generic (PLEG): container finished" podID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerID="e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37" exitCode=0 Jan 26 20:30:10 crc kubenswrapper[4737]: I0126 20:30:10.700575 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerDied","Data":"e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37"} Jan 26 20:30:10 crc kubenswrapper[4737]: I0126 20:30:10.700937 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerStarted","Data":"ed10f9f7d436276d802653bb502b4a58d7e097c161e1c93e1c402c1b4c386964"} Jan 26 20:30:15 crc kubenswrapper[4737]: I0126 20:30:15.778674 4737 generic.go:334] "Generic (PLEG): container finished" podID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerID="85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80" exitCode=0 Jan 26 20:30:15 crc kubenswrapper[4737]: I0126 20:30:15.778820 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerDied","Data":"85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80"} Jan 26 20:30:16 crc kubenswrapper[4737]: I0126 20:30:16.798919 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" 
event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerStarted","Data":"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f"} Jan 26 20:30:16 crc kubenswrapper[4737]: I0126 20:30:16.826591 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lpc54" podStartSLOduration=2.316269553 podStartE2EDuration="7.826568715s" podCreationTimestamp="2026-01-26 20:30:09 +0000 UTC" firstStartedPulling="2026-01-26 20:30:10.703985673 +0000 UTC m=+7184.012180381" lastFinishedPulling="2026-01-26 20:30:16.214284835 +0000 UTC m=+7189.522479543" observedRunningTime="2026-01-26 20:30:16.820877426 +0000 UTC m=+7190.129072144" watchObservedRunningTime="2026-01-26 20:30:16.826568715 +0000 UTC m=+7190.134763443" Jan 26 20:30:19 crc kubenswrapper[4737]: I0126 20:30:19.426510 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:19 crc kubenswrapper[4737]: I0126 20:30:19.428380 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:19 crc kubenswrapper[4737]: I0126 20:30:19.481385 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:21 crc kubenswrapper[4737]: I0126 20:30:21.459802 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-zx2hl_19021b35-3bd2-40f3-a312-466b0c15bc35/cluster-logging-operator/0.log" Jan 26 20:30:21 crc kubenswrapper[4737]: I0126 20:30:21.754147 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-vbgpv_6e3d8492-59e3-4dc0-b14a-261053397eb7/collector/0.log" Jan 26 20:30:21 crc kubenswrapper[4737]: I0126 20:30:21.810370 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-compactor-0_274a7c37-3e64-45ce-8d6f-dfeac9c15288/loki-compactor/0.log" Jan 26 20:30:21 crc kubenswrapper[4737]: I0126 20:30:21.976097 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-6wp46_f15f2968-e05a-49f0-8024-3a1764d4b9e2/loki-distributor/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.042021 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-c5kng_e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e/gateway/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.126386 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-c5kng_e9d6a3ae-5064-4b4a-bbdb-3b05596bc38e/opa/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.254977 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-kcfsl_225843b1-6423-4d7f-aa3c-5945a9e4bd8e/gateway/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.333791 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5c6b766d5f-kcfsl_225843b1-6423-4d7f-aa3c-5945a9e4bd8e/opa/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.481719 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_7d74d1ee-657b-4404-9390-cd94e3cb6d2c/loki-index-gateway/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.720679 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_a05526c9-7b63-4f57-bdaf-95d8a7912bb8/loki-ingester/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.753752 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-rsdfq_15449cbd-7753-47b6-811f-059d9f83ff0b/loki-querier/0.log" Jan 26 20:30:22 crc kubenswrapper[4737]: I0126 20:30:22.927895 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-qqkdc_954c3b49-1fc8-4e5c-9312-7b8e66b7a681/loki-query-frontend/0.log" Jan 26 20:30:29 crc kubenswrapper[4737]: I0126 20:30:29.502539 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:29 crc kubenswrapper[4737]: I0126 20:30:29.567711 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:29 crc kubenswrapper[4737]: I0126 20:30:29.985428 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lpc54" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="registry-server" containerID="cri-o://41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f" gracePeriod=2 Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.557193 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.676623 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czr4k\" (UniqueName: \"kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k\") pod \"4ce28f0d-3036-4836-8a39-2d77c98813fe\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.676834 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content\") pod \"4ce28f0d-3036-4836-8a39-2d77c98813fe\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.677188 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities\") pod \"4ce28f0d-3036-4836-8a39-2d77c98813fe\" (UID: \"4ce28f0d-3036-4836-8a39-2d77c98813fe\") " Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.677827 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities" (OuterVolumeSpecName: "utilities") pod "4ce28f0d-3036-4836-8a39-2d77c98813fe" (UID: "4ce28f0d-3036-4836-8a39-2d77c98813fe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.678313 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.687289 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k" (OuterVolumeSpecName: "kube-api-access-czr4k") pod "4ce28f0d-3036-4836-8a39-2d77c98813fe" (UID: "4ce28f0d-3036-4836-8a39-2d77c98813fe"). InnerVolumeSpecName "kube-api-access-czr4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.734733 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ce28f0d-3036-4836-8a39-2d77c98813fe" (UID: "4ce28f0d-3036-4836-8a39-2d77c98813fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.781225 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czr4k\" (UniqueName: \"kubernetes.io/projected/4ce28f0d-3036-4836-8a39-2d77c98813fe-kube-api-access-czr4k\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.781293 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ce28f0d-3036-4836-8a39-2d77c98813fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.948850 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:30:30 crc kubenswrapper[4737]: I0126 20:30:30.948950 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.007513 4737 generic.go:334] "Generic (PLEG): container finished" podID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerID="41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f" exitCode=0 Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.007676 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lpc54" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.010060 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerDied","Data":"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f"} Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.010151 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lpc54" event={"ID":"4ce28f0d-3036-4836-8a39-2d77c98813fe","Type":"ContainerDied","Data":"ed10f9f7d436276d802653bb502b4a58d7e097c161e1c93e1c402c1b4c386964"} Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.010183 4737 scope.go:117] "RemoveContainer" containerID="41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.052915 4737 scope.go:117] "RemoveContainer" containerID="85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.083393 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.102631 4737 scope.go:117] "RemoveContainer" containerID="e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.113260 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lpc54"] Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.148936 4737 scope.go:117] "RemoveContainer" containerID="41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f" Jan 26 20:30:31 crc kubenswrapper[4737]: E0126 20:30:31.149826 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f\": container with ID starting with 41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f not found: ID does not exist" containerID="41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.149875 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f"} err="failed to get container status \"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f\": rpc error: code = NotFound desc = could not find container \"41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f\": container with ID starting with 41be8bbdc6c1518e2f2560f2f35e5ab85331e95e85399d2cac5d3a01f0ae910f not found: ID does not exist" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.149905 4737 scope.go:117] "RemoveContainer" containerID="85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80" Jan 26 20:30:31 crc kubenswrapper[4737]: E0126 20:30:31.150388 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80\": container with ID starting with 85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80 not found: ID does not exist" containerID="85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.150459 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80"} err="failed to get container status \"85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80\": rpc error: code = NotFound desc = could not find container \"85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80\": container with ID 
starting with 85e9cb3e2f2d73c444c90f8cbebc9b2c41bf5f249034b5d9d75a5ad17d019e80 not found: ID does not exist" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.150519 4737 scope.go:117] "RemoveContainer" containerID="e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37" Jan 26 20:30:31 crc kubenswrapper[4737]: E0126 20:30:31.150976 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37\": container with ID starting with e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37 not found: ID does not exist" containerID="e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37" Jan 26 20:30:31 crc kubenswrapper[4737]: I0126 20:30:31.151006 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37"} err="failed to get container status \"e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37\": rpc error: code = NotFound desc = could not find container \"e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37\": container with ID starting with e84476aa7503e3b414b321d1cea454cb57a8fc83b9fb82b69b486b85106ddd37 not found: ID does not exist" Jan 26 20:30:33 crc kubenswrapper[4737]: I0126 20:30:33.007380 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" path="/var/lib/kubelet/pods/4ce28f0d-3036-4836-8a39-2d77c98813fe/volumes" Jan 26 20:30:39 crc kubenswrapper[4737]: I0126 20:30:39.613634 4737 scope.go:117] "RemoveContainer" containerID="7c1bfb94c5e071b2bda36678fdf3fcf688f2ac601326ed878d61b04568b21b2b" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.113333 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-54gqz_316b58c7-76eb-4b53-adee-6e456286e313/kube-rbac-proxy/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.225030 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-54gqz_316b58c7-76eb-4b53-adee-6e456286e313/controller/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.342560 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.553510 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.585903 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.653775 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.670084 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.908293 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.909894 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.919833 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:30:40 crc kubenswrapper[4737]: I0126 20:30:40.993104 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.269288 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-frr-files/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.270340 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/controller/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.313420 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-reloader/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.319625 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/cp-metrics/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.566628 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/kube-rbac-proxy/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.575258 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/frr-metrics/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.600320 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/kube-rbac-proxy-frr/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.821588 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/reloader/0.log" Jan 26 20:30:41 crc kubenswrapper[4737]: I0126 20:30:41.879184 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-zg2pm_db423313-ded0-4540-abdb-a7a8c5944989/frr-k8s-webhook-server/0.log" Jan 26 20:30:42 crc kubenswrapper[4737]: I0126 20:30:42.094548 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-948bbcb9c-jrztq_0a7ecdef-57dc-45fc-9142-3889fb44d2e9/manager/0.log" Jan 26 20:30:42 crc kubenswrapper[4737]: I0126 20:30:42.334109 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75cffd444d-hgw8t_db9aadf5-9872-40e4-8333-da2779361dcf/webhook-server/0.log" Jan 26 20:30:42 crc kubenswrapper[4737]: I0126 20:30:42.461899 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bs5fc_ee468080-345d-4821-ab62-d1034fd7cd01/kube-rbac-proxy/0.log" Jan 26 20:30:43 crc kubenswrapper[4737]: I0126 20:30:43.303719 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bs5fc_ee468080-345d-4821-ab62-d1034fd7cd01/speaker/0.log" Jan 26 20:30:43 crc kubenswrapper[4737]: I0126 20:30:43.755529 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ts4kl_f86f264d-5704-4995-9e15-13b28bd18dc4/frr/0.log" Jan 26 20:30:58 crc kubenswrapper[4737]: I0126 20:30:58.084995 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" Jan 26 20:30:58 crc kubenswrapper[4737]: I0126 20:30:58.446374 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" 
Jan 26 20:30:58 crc kubenswrapper[4737]: I0126 20:30:58.456178 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:30:58 crc kubenswrapper[4737]: I0126 20:30:58.482390 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:58.999811 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/util/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.003747 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/pull/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.081531 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2jtwmf_52bcbbde-c297-4cce-80fd-cde90894b5df/extract/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.215215 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.430565 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.450246 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.475203 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.683424 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/util/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.696783 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/extract/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.697935 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx99dd_31b3687c-76cb-44be-b404-f88ed8a1b901/pull/0.log" Jan 26 20:30:59 crc kubenswrapper[4737]: I0126 20:30:59.878007 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.147571 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.173598 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 
20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.201044 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.384796 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/util/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.385888 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/pull/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.395168 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bw4px7_65f7c351-84bb-41e0-9775-a820da54e2eb/extract/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.585545 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.860608 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.863264 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.894749 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.949027 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.949140 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.949213 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.950521 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:31:00 crc kubenswrapper[4737]: I0126 20:31:00.950626 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" containerID="cri-o://05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd" gracePeriod=600 Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.065809 
4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/util/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.093494 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/extract/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.155660 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139kn98_7f36ed9b-a077-4329-803a-d5738c97e844/pull/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.313525 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.443576 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd" exitCode=0 Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.443634 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd"} Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.445194 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerStarted","Data":"771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"} Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.445266 4737 scope.go:117] 
"RemoveContainer" containerID="7aba965480739423a22438a8c1c4daeec43131ccb401d5c79d36c732e6893546" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.559541 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.623838 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.638681 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.843413 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/util/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.856182 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/pull/0.log" Jan 26 20:31:01 crc kubenswrapper[4737]: I0126 20:31:01.900376 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087hj6x_c801ad0c-6ec9-4497-ba0d-bad429d70783/extract/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.049201 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.257866 4737 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.260892 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.288721 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.436150 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-content/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.499047 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/extract-utilities/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.531023 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.807311 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.922991 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:31:02 crc kubenswrapper[4737]: I0126 20:31:02.934105 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.049382 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-utilities/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.053845 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/extract-content/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.372013 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dr8sf_faf30849-7c19-44f9-ba42-3ad3f14efe0d/marketplace-operator/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.462154 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.844395 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.850254 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.863250 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hjhjz_99f52814-0bfb-4fa6-9bfd-a9bcf704d8f2/registry-server/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.904295 4737 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:31:03 crc kubenswrapper[4737]: I0126 20:31:03.951894 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qjsbs_f325c214-4902-4a66-a21c-d29413e523f3/registry-server/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.103272 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-content/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.166847 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/extract-utilities/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.230955 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.408861 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kgrfg_927a6ff0-afc5-477b-b139-e02a9f9b4452/registry-server/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.436915 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.521612 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.556725 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" 
Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.713137 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-content/0.log" Jan 26 20:31:04 crc kubenswrapper[4737]: I0126 20:31:04.736344 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/extract-utilities/0.log" Jan 26 20:31:05 crc kubenswrapper[4737]: I0126 20:31:05.658164 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nx2jv_89059a8c-e6df-4f31-afd5-78a98ee6b4e5/registry-server/0.log" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.309965 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:11 crc kubenswrapper[4737]: E0126 20:31:11.312590 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="extract-content" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.312686 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="extract-content" Jan 26 20:31:11 crc kubenswrapper[4737]: E0126 20:31:11.312785 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="extract-utilities" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.312846 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="extract-utilities" Jan 26 20:31:11 crc kubenswrapper[4737]: E0126 20:31:11.312923 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="registry-server" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.312975 4737 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="registry-server" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.313305 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce28f0d-3036-4836-8a39-2d77c98813fe" containerName="registry-server" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.315842 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.347324 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.411479 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.411884 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.413344 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99d86\" (UniqueName: \"kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.516703 4737 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-99d86\" (UniqueName: \"kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.516796 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.516856 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.517830 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.517836 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.547698 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99d86\" (UniqueName: 
\"kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86\") pod \"redhat-operators-mxlxc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:11 crc kubenswrapper[4737]: I0126 20:31:11.654440 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:12 crc kubenswrapper[4737]: I0126 20:31:12.212945 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:12 crc kubenswrapper[4737]: I0126 20:31:12.601663 4737 generic.go:334] "Generic (PLEG): container finished" podID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerID="fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128" exitCode=0 Jan 26 20:31:12 crc kubenswrapper[4737]: I0126 20:31:12.601723 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerDied","Data":"fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128"} Jan 26 20:31:12 crc kubenswrapper[4737]: I0126 20:31:12.601794 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerStarted","Data":"0533bbc633ae47c87029efd67c6ae530ac87652175b4a4d395b5608783db3895"} Jan 26 20:31:14 crc kubenswrapper[4737]: I0126 20:31:14.629997 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerStarted","Data":"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe"} Jan 26 20:31:17 crc kubenswrapper[4737]: I0126 20:31:17.671055 4737 generic.go:334] "Generic (PLEG): container finished" podID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" 
containerID="049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe" exitCode=0 Jan 26 20:31:17 crc kubenswrapper[4737]: I0126 20:31:17.671116 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerDied","Data":"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe"} Jan 26 20:31:18 crc kubenswrapper[4737]: I0126 20:31:18.688663 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerStarted","Data":"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed"} Jan 26 20:31:18 crc kubenswrapper[4737]: I0126 20:31:18.717633 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mxlxc" podStartSLOduration=2.251377381 podStartE2EDuration="7.717594215s" podCreationTimestamp="2026-01-26 20:31:11 +0000 UTC" firstStartedPulling="2026-01-26 20:31:12.607272752 +0000 UTC m=+7245.915467470" lastFinishedPulling="2026-01-26 20:31:18.073489606 +0000 UTC m=+7251.381684304" observedRunningTime="2026-01-26 20:31:18.712924771 +0000 UTC m=+7252.021119489" watchObservedRunningTime="2026-01-26 20:31:18.717594215 +0000 UTC m=+7252.025788923" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.574314 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-s2s9r_33031648-f53a-4f71-8c03-041f7f1fcbf5/prometheus-operator-admission-webhook/0.log" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.650983 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jvfnx_780e85db-cb8c-4a8c-920d-2594cd33eebf/prometheus-operator/0.log" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.654935 4737 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.655276 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.708282 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-b48686b7d-tjv85_cc4df7ac-3298-4316-8c9b-1ac9827330fd/prometheus-operator-admission-webhook/0.log" Jan 26 20:31:21 crc kubenswrapper[4737]: I0126 20:31:21.898255 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-xf99z_b319754a-04cc-40dd-b031-ea72a3d19db2/operator/0.log" Jan 26 20:31:22 crc kubenswrapper[4737]: I0126 20:31:22.044561 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r5vwv_7478def9-da54-4632-803e-47f36b6fb64b/perses-operator/0.log" Jan 26 20:31:22 crc kubenswrapper[4737]: I0126 20:31:22.065844 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-ckxn2_6b80cd0d-81ac-4f45-a80c-3b1cf442fc44/observability-ui-dashboards/0.log" Jan 26 20:31:22 crc kubenswrapper[4737]: I0126 20:31:22.725970 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mxlxc" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" probeResult="failure" output=< Jan 26 20:31:22 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 20:31:22 crc kubenswrapper[4737]: > Jan 26 20:31:32 crc kubenswrapper[4737]: I0126 20:31:32.716234 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mxlxc" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" probeResult="failure" 
output=< Jan 26 20:31:32 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s Jan 26 20:31:32 crc kubenswrapper[4737]: > Jan 26 20:31:37 crc kubenswrapper[4737]: I0126 20:31:37.056905 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/kube-rbac-proxy/0.log" Jan 26 20:31:37 crc kubenswrapper[4737]: I0126 20:31:37.164948 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6dbff5787b-d86s9_697c3f44-b05d-4404-bd79-a93c1c29b8ad/manager/0.log" Jan 26 20:31:41 crc kubenswrapper[4737]: I0126 20:31:41.732753 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:41 crc kubenswrapper[4737]: I0126 20:31:41.827229 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:42 crc kubenswrapper[4737]: I0126 20:31:42.521739 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:42 crc kubenswrapper[4737]: I0126 20:31:42.978788 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mxlxc" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" containerID="cri-o://7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed" gracePeriod=2 Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.692190 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.724342 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99d86\" (UniqueName: \"kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86\") pod \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.724607 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content\") pod \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.724846 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities\") pod \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\" (UID: \"adf5c442-7693-4f1d-b8fb-a018b86bf8fc\") " Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.742677 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities" (OuterVolumeSpecName: "utilities") pod "adf5c442-7693-4f1d-b8fb-a018b86bf8fc" (UID: "adf5c442-7693-4f1d-b8fb-a018b86bf8fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.743952 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86" (OuterVolumeSpecName: "kube-api-access-99d86") pod "adf5c442-7693-4f1d-b8fb-a018b86bf8fc" (UID: "adf5c442-7693-4f1d-b8fb-a018b86bf8fc"). InnerVolumeSpecName "kube-api-access-99d86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.828594 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.829167 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99d86\" (UniqueName: \"kubernetes.io/projected/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-kube-api-access-99d86\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.890179 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adf5c442-7693-4f1d-b8fb-a018b86bf8fc" (UID: "adf5c442-7693-4f1d-b8fb-a018b86bf8fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.932045 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf5c442-7693-4f1d-b8fb-a018b86bf8fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.984587 4737 generic.go:334] "Generic (PLEG): container finished" podID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerID="7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed" exitCode=0 Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.984636 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerDied","Data":"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed"} Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.984666 4737 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mxlxc" event={"ID":"adf5c442-7693-4f1d-b8fb-a018b86bf8fc","Type":"ContainerDied","Data":"0533bbc633ae47c87029efd67c6ae530ac87652175b4a4d395b5608783db3895"} Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.984880 4737 scope.go:117] "RemoveContainer" containerID="7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed" Jan 26 20:31:43 crc kubenswrapper[4737]: I0126 20:31:43.985915 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxlxc" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.015747 4737 scope.go:117] "RemoveContainer" containerID="049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.037924 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.075947 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mxlxc"] Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.081258 4737 scope.go:117] "RemoveContainer" containerID="fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.124190 4737 scope.go:117] "RemoveContainer" containerID="7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed" Jan 26 20:31:44 crc kubenswrapper[4737]: E0126 20:31:44.125098 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed\": container with ID starting with 7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed not found: ID does not exist" containerID="7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.125172 4737 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed"} err="failed to get container status \"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed\": rpc error: code = NotFound desc = could not find container \"7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed\": container with ID starting with 7e2d7a8ab397974e080c90be9dbbc48c24a9b45b880aa374d5e71770e7ee96ed not found: ID does not exist" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.125200 4737 scope.go:117] "RemoveContainer" containerID="049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe" Jan 26 20:31:44 crc kubenswrapper[4737]: E0126 20:31:44.125684 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe\": container with ID starting with 049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe not found: ID does not exist" containerID="049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.125716 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe"} err="failed to get container status \"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe\": rpc error: code = NotFound desc = could not find container \"049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe\": container with ID starting with 049afc51ed6aed179a6b0e73691260cb7656c2378a8fc47a0271ac4ffbe444fe not found: ID does not exist" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.125741 4737 scope.go:117] "RemoveContainer" containerID="fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128" Jan 26 20:31:44 crc kubenswrapper[4737]: E0126 
20:31:44.126042 4737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128\": container with ID starting with fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128 not found: ID does not exist" containerID="fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128" Jan 26 20:31:44 crc kubenswrapper[4737]: I0126 20:31:44.126117 4737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128"} err="failed to get container status \"fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128\": rpc error: code = NotFound desc = could not find container \"fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128\": container with ID starting with fbb361394aef34bd73eff346727973cc4eb4eeac568d1b6d6ba70fea6371d128 not found: ID does not exist" Jan 26 20:31:45 crc kubenswrapper[4737]: I0126 20:31:45.024334 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" path="/var/lib/kubelet/pods/adf5c442-7693-4f1d-b8fb-a018b86bf8fc/volumes" Jan 26 20:32:39 crc kubenswrapper[4737]: I0126 20:32:39.743127 4737 scope.go:117] "RemoveContainer" containerID="653d586082debae7fc7d0e6915090401d6a33e24e072fea94e826fe3154a93f1" Jan 26 20:32:39 crc kubenswrapper[4737]: I0126 20:32:39.796293 4737 scope.go:117] "RemoveContainer" containerID="af32d1947da6bd08a5e328c4ccb1b35193ba0cc8d414a21d6a802d2b35ec3a56" Jan 26 20:33:30 crc kubenswrapper[4737]: I0126 20:33:30.949913 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:33:30 crc 
kubenswrapper[4737]: I0126 20:33:30.950987 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:33:54 crc kubenswrapper[4737]: I0126 20:33:54.908392 4737 generic.go:334] "Generic (PLEG): container finished" podID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerID="79640e9765b040448623723ed759a625b46a3c95d6c5a790058b7aaa17ba5d48" exitCode=0 Jan 26 20:33:54 crc kubenswrapper[4737]: I0126 20:33:54.909105 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" event={"ID":"f76904f4-fa51-456c-8c9a-654f31187e4b","Type":"ContainerDied","Data":"79640e9765b040448623723ed759a625b46a3c95d6c5a790058b7aaa17ba5d48"} Jan 26 20:33:54 crc kubenswrapper[4737]: I0126 20:33:54.910875 4737 scope.go:117] "RemoveContainer" containerID="79640e9765b040448623723ed759a625b46a3c95d6c5a790058b7aaa17ba5d48" Jan 26 20:33:55 crc kubenswrapper[4737]: I0126 20:33:55.504753 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jsjh2_must-gather-nfwrb_f76904f4-fa51-456c-8c9a-654f31187e4b/gather/0.log" Jan 26 20:34:00 crc kubenswrapper[4737]: I0126 20:34:00.949403 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:34:00 crc kubenswrapper[4737]: I0126 20:34:00.950242 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:34:07 crc kubenswrapper[4737]: I0126 20:34:07.769721 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jsjh2/must-gather-nfwrb"] Jan 26 20:34:07 crc kubenswrapper[4737]: I0126 20:34:07.770836 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="copy" containerID="cri-o://039fc550ae34919d299e6107c354b530aa0400512a8b3d05acd095598a6004ca" gracePeriod=2 Jan 26 20:34:07 crc kubenswrapper[4737]: I0126 20:34:07.788709 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jsjh2/must-gather-nfwrb"] Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.093131 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jsjh2_must-gather-nfwrb_f76904f4-fa51-456c-8c9a-654f31187e4b/copy/0.log" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.094626 4737 generic.go:334] "Generic (PLEG): container finished" podID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerID="039fc550ae34919d299e6107c354b530aa0400512a8b3d05acd095598a6004ca" exitCode=143 Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.385914 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jsjh2_must-gather-nfwrb_f76904f4-fa51-456c-8c9a-654f31187e4b/copy/0.log" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.386674 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.527568 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfr2g\" (UniqueName: \"kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g\") pod \"f76904f4-fa51-456c-8c9a-654f31187e4b\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.527661 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output\") pod \"f76904f4-fa51-456c-8c9a-654f31187e4b\" (UID: \"f76904f4-fa51-456c-8c9a-654f31187e4b\") " Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.534501 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g" (OuterVolumeSpecName: "kube-api-access-dfr2g") pod "f76904f4-fa51-456c-8c9a-654f31187e4b" (UID: "f76904f4-fa51-456c-8c9a-654f31187e4b"). InnerVolumeSpecName "kube-api-access-dfr2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.633745 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfr2g\" (UniqueName: \"kubernetes.io/projected/f76904f4-fa51-456c-8c9a-654f31187e4b-kube-api-access-dfr2g\") on node \"crc\" DevicePath \"\"" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.759789 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f76904f4-fa51-456c-8c9a-654f31187e4b" (UID: "f76904f4-fa51-456c-8c9a-654f31187e4b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.848513 4737 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f76904f4-fa51-456c-8c9a-654f31187e4b-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 20:34:08 crc kubenswrapper[4737]: I0126 20:34:08.996574 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" path="/var/lib/kubelet/pods/f76904f4-fa51-456c-8c9a-654f31187e4b/volumes" Jan 26 20:34:09 crc kubenswrapper[4737]: I0126 20:34:09.106580 4737 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jsjh2_must-gather-nfwrb_f76904f4-fa51-456c-8c9a-654f31187e4b/copy/0.log" Jan 26 20:34:09 crc kubenswrapper[4737]: I0126 20:34:09.107004 4737 scope.go:117] "RemoveContainer" containerID="039fc550ae34919d299e6107c354b530aa0400512a8b3d05acd095598a6004ca" Jan 26 20:34:09 crc kubenswrapper[4737]: I0126 20:34:09.107044 4737 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jsjh2/must-gather-nfwrb" Jan 26 20:34:09 crc kubenswrapper[4737]: I0126 20:34:09.144623 4737 scope.go:117] "RemoveContainer" containerID="79640e9765b040448623723ed759a625b46a3c95d6c5a790058b7aaa17ba5d48" Jan 26 20:34:30 crc kubenswrapper[4737]: I0126 20:34:30.948973 4737 patch_prober.go:28] interesting pod/machine-config-daemon-qxkj5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:34:30 crc kubenswrapper[4737]: I0126 20:34:30.949725 4737 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:34:30 crc kubenswrapper[4737]: I0126 20:34:30.949787 4737 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" Jan 26 20:34:30 crc kubenswrapper[4737]: I0126 20:34:30.950797 4737 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"} pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:34:30 crc kubenswrapper[4737]: I0126 20:34:30.950857 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" containerName="machine-config-daemon" 
containerID="cri-o://771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" gracePeriod=600 Jan 26 20:34:31 crc kubenswrapper[4737]: E0126 20:34:31.088639 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:34:31 crc kubenswrapper[4737]: I0126 20:34:31.402674 4737 generic.go:334] "Generic (PLEG): container finished" podID="afd75772-7900-46c3-b392-afb075e1cc08" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" exitCode=0 Jan 26 20:34:31 crc kubenswrapper[4737]: I0126 20:34:31.402749 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" event={"ID":"afd75772-7900-46c3-b392-afb075e1cc08","Type":"ContainerDied","Data":"771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"} Jan 26 20:34:31 crc kubenswrapper[4737]: I0126 20:34:31.403131 4737 scope.go:117] "RemoveContainer" containerID="05a46f8e5c92ce620be075be65e82bacded6a11097569b518c26dfa30624b4cd" Jan 26 20:34:31 crc kubenswrapper[4737]: I0126 20:34:31.404238 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:34:31 crc kubenswrapper[4737]: E0126 20:34:31.404621 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" 
podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:34:42 crc kubenswrapper[4737]: I0126 20:34:42.982451 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:34:42 crc kubenswrapper[4737]: E0126 20:34:42.983563 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:34:54 crc kubenswrapper[4737]: I0126 20:34:54.982375 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:34:54 crc kubenswrapper[4737]: E0126 20:34:54.983337 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:35:08 crc kubenswrapper[4737]: I0126 20:35:08.983040 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:35:08 crc kubenswrapper[4737]: E0126 20:35:08.985232 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:35:19 crc kubenswrapper[4737]: I0126 20:35:19.982291 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:35:19 crc kubenswrapper[4737]: E0126 20:35:19.983398 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:35:32 crc kubenswrapper[4737]: I0126 20:35:32.982565 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:35:32 crc kubenswrapper[4737]: E0126 20:35:32.983560 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:35:43 crc kubenswrapper[4737]: I0126 20:35:43.983555 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:35:43 crc kubenswrapper[4737]: E0126 20:35:43.984936 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:35:54 crc kubenswrapper[4737]: I0126 20:35:54.983633 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:35:54 crc kubenswrapper[4737]: E0126 20:35:54.986489 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:36:07 crc kubenswrapper[4737]: I0126 20:36:07.982653 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:36:07 crc kubenswrapper[4737]: E0126 20:36:07.985003 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:36:22 crc kubenswrapper[4737]: I0126 20:36:22.982825 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2" Jan 26 20:36:22 crc kubenswrapper[4737]: E0126 20:36:22.985590 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.715945 4737 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7ln7"] Jan 26 20:36:31 crc kubenswrapper[4737]: E0126 20:36:31.718924 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.718953 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" Jan 26 20:36:31 crc kubenswrapper[4737]: E0126 20:36:31.718966 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="extract-content" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.718972 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="extract-content" Jan 26 20:36:31 crc kubenswrapper[4737]: E0126 20:36:31.718991 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="gather" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.718999 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="gather" Jan 26 20:36:31 crc kubenswrapper[4737]: E0126 20:36:31.719007 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="extract-utilities" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.719014 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="extract-utilities" 
Jan 26 20:36:31 crc kubenswrapper[4737]: E0126 20:36:31.719032 4737 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="copy" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.719039 4737 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="copy" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.719688 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="copy" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.719714 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76904f4-fa51-456c-8c9a-654f31187e4b" containerName="gather" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.719735 4737 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf5c442-7693-4f1d-b8fb-a018b86bf8fc" containerName="registry-server" Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.727591 4737 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.784536 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.784704 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.785185 4737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnsb\" (UniqueName: \"kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.894705 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.894915 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnsb\" (UniqueName: \"kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.895135 4737 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.896496 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.897575 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:31 crc kubenswrapper[4737]: I0126 20:36:31.939648 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7ln7"]
Jan 26 20:36:32 crc kubenswrapper[4737]: I0126 20:36:32.010156 4737 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnsb\" (UniqueName: \"kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb\") pod \"community-operators-p7ln7\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") " pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:32 crc kubenswrapper[4737]: I0126 20:36:32.123143 4737 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:32 crc kubenswrapper[4737]: I0126 20:36:32.410616 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6dd8ff9d59-rttts" podUID="38df0a7c-47f1-4834-b970-d815d009b6d7" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 20:36:33 crc kubenswrapper[4737]: I0126 20:36:33.021975 4737 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7ln7"]
Jan 26 20:36:33 crc kubenswrapper[4737]: I0126 20:36:33.154372 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerStarted","Data":"0e5c961d5baacb6cc89c486ff6c82c77a772bde08c72a721a3fa00bc890eabc9"}
Jan 26 20:36:34 crc kubenswrapper[4737]: I0126 20:36:34.214806 4737 generic.go:334] "Generic (PLEG): container finished" podID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" containerID="c5f6cefdd1cc9a7799e45caf68826cf00fc0a2a6185c6b315ac72a6914e46e21" exitCode=0
Jan 26 20:36:34 crc kubenswrapper[4737]: I0126 20:36:34.215327 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerDied","Data":"c5f6cefdd1cc9a7799e45caf68826cf00fc0a2a6185c6b315ac72a6914e46e21"}
Jan 26 20:36:34 crc kubenswrapper[4737]: I0126 20:36:34.219310 4737 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 20:36:34 crc kubenswrapper[4737]: I0126 20:36:34.982240 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:36:34 crc kubenswrapper[4737]: E0126 20:36:34.982770 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"
Jan 26 20:36:36 crc kubenswrapper[4737]: I0126 20:36:36.251412 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerStarted","Data":"2a74b9d5ab228e74fcbc1755af944c9cc9194bd9a19b10a68dc495b0da852dbd"}
Jan 26 20:36:37 crc kubenswrapper[4737]: I0126 20:36:37.269460 4737 generic.go:334] "Generic (PLEG): container finished" podID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" containerID="2a74b9d5ab228e74fcbc1755af944c9cc9194bd9a19b10a68dc495b0da852dbd" exitCode=0
Jan 26 20:36:37 crc kubenswrapper[4737]: I0126 20:36:37.269548 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerDied","Data":"2a74b9d5ab228e74fcbc1755af944c9cc9194bd9a19b10a68dc495b0da852dbd"}
Jan 26 20:36:38 crc kubenswrapper[4737]: I0126 20:36:38.283472 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerStarted","Data":"d1c8b8872be8c44557b0d2932ff036173e7ed4c787b8164dd913dc4dfb1d03aa"}
Jan 26 20:36:38 crc kubenswrapper[4737]: I0126 20:36:38.396703 4737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7ln7" podStartSLOduration=3.711009978 podStartE2EDuration="7.312446478s" podCreationTimestamp="2026-01-26 20:36:31 +0000 UTC" firstStartedPulling="2026-01-26 20:36:34.217442783 +0000 UTC m=+7567.525637491" lastFinishedPulling="2026-01-26 20:36:37.818879283 +0000 UTC m=+7571.127073991" observedRunningTime="2026-01-26 20:36:38.309190709 +0000 UTC m=+7571.617385407" watchObservedRunningTime="2026-01-26 20:36:38.312446478 +0000 UTC m=+7571.620641176"
Jan 26 20:36:42 crc kubenswrapper[4737]: I0126 20:36:42.124430 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:42 crc kubenswrapper[4737]: I0126 20:36:42.125223 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:43 crc kubenswrapper[4737]: I0126 20:36:43.212550 4737 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p7ln7" podUID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" containerName="registry-server" probeResult="failure" output=<
Jan 26 20:36:43 crc kubenswrapper[4737]: timeout: failed to connect service ":50051" within 1s
Jan 26 20:36:43 crc kubenswrapper[4737]: >
Jan 26 20:36:47 crc kubenswrapper[4737]: I0126 20:36:47.991874 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:36:47 crc kubenswrapper[4737]: E0126 20:36:47.994157 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"
Jan 26 20:36:52 crc kubenswrapper[4737]: I0126 20:36:52.191851 4737 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:52 crc kubenswrapper[4737]: I0126 20:36:52.259316 4737 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:52 crc kubenswrapper[4737]: I0126 20:36:52.440277 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7ln7"]
Jan 26 20:36:53 crc kubenswrapper[4737]: I0126 20:36:53.539217 4737 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7ln7" podUID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" containerName="registry-server" containerID="cri-o://d1c8b8872be8c44557b0d2932ff036173e7ed4c787b8164dd913dc4dfb1d03aa" gracePeriod=2
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.567212 4737 generic.go:334] "Generic (PLEG): container finished" podID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" containerID="d1c8b8872be8c44557b0d2932ff036173e7ed4c787b8164dd913dc4dfb1d03aa" exitCode=0
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.568050 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerDied","Data":"d1c8b8872be8c44557b0d2932ff036173e7ed4c787b8164dd913dc4dfb1d03aa"}
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.812771 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.910890 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfnsb\" (UniqueName: \"kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb\") pod \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") "
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.910977 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities\") pod \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") "
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.912576 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities" (OuterVolumeSpecName: "utilities") pod "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" (UID: "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.913688 4737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content\") pod \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\" (UID: \"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a\") "
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.915664 4737 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.931421 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb" (OuterVolumeSpecName: "kube-api-access-sfnsb") pod "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" (UID: "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a"). InnerVolumeSpecName "kube-api-access-sfnsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:36:54 crc kubenswrapper[4737]: I0126 20:36:54.995825 4737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" (UID: "e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.021024 4737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfnsb\" (UniqueName: \"kubernetes.io/projected/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-kube-api-access-sfnsb\") on node \"crc\" DevicePath \"\""
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.021082 4737 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.583006 4737 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7ln7" event={"ID":"e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a","Type":"ContainerDied","Data":"0e5c961d5baacb6cc89c486ff6c82c77a772bde08c72a721a3fa00bc890eabc9"}
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.583089 4737 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7ln7"
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.583125 4737 scope.go:117] "RemoveContainer" containerID="d1c8b8872be8c44557b0d2932ff036173e7ed4c787b8164dd913dc4dfb1d03aa"
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.615027 4737 scope.go:117] "RemoveContainer" containerID="2a74b9d5ab228e74fcbc1755af944c9cc9194bd9a19b10a68dc495b0da852dbd"
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.629042 4737 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7ln7"]
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.641911 4737 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7ln7"]
Jan 26 20:36:55 crc kubenswrapper[4737]: I0126 20:36:55.656489 4737 scope.go:117] "RemoveContainer" containerID="c5f6cefdd1cc9a7799e45caf68826cf00fc0a2a6185c6b315ac72a6914e46e21"
Jan 26 20:36:57 crc kubenswrapper[4737]: I0126 20:36:57.010888 4737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a" path="/var/lib/kubelet/pods/e5b46ad5-5106-48e3-838b-0d2bcf9bcc8a/volumes"
Jan 26 20:36:59 crc kubenswrapper[4737]: I0126 20:36:59.982193 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:36:59 crc kubenswrapper[4737]: E0126 20:36:59.984429 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"
Jan 26 20:37:10 crc kubenswrapper[4737]: I0126 20:37:10.982131 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:37:10 crc kubenswrapper[4737]: E0126 20:37:10.983422 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"
Jan 26 20:37:17 crc kubenswrapper[4737]: I0126 20:37:17.406941 4737 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6dd8ff9d59-rttts" podUID="38df0a7c-47f1-4834-b970-d815d009b6d7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 26 20:37:25 crc kubenswrapper[4737]: I0126 20:37:25.983203 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:37:25 crc kubenswrapper[4737]: E0126 20:37:25.984139 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"
Jan 26 20:37:40 crc kubenswrapper[4737]: I0126 20:37:40.983103 4737 scope.go:117] "RemoveContainer" containerID="771be7dd7dd89d05e4011b9c3012a96210dbedb7e310e77109a9521a2ac994b2"
Jan 26 20:37:40 crc kubenswrapper[4737]: E0126 20:37:40.984176 4737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qxkj5_openshift-machine-config-operator(afd75772-7900-46c3-b392-afb075e1cc08)\"" pod="openshift-machine-config-operator/machine-config-daemon-qxkj5" podUID="afd75772-7900-46c3-b392-afb075e1cc08"